Copy a ConfigMgr Application DeploymentType

A small function inspired by Fred Bainbridge's post on how to append an OS requirement to a deployment type. The purpose of the function is to copy a deployment type within an application, but if someone feels like spending a few hours rewriting it to copy between different applications, that could possibly work as well.

 

function Copy-CMAppDT {
<#
.SYNOPSIS
Copy a single Deployment Type within an application
.DESCRIPTION
This will create a copy of a DeploymentType, with the lowest priority and the name specified
.EXAMPLE
Copy-CMAppDT -appName "PingKing 2.0.0" -DeploymentType "PingKing 2.0.0" -newDTname "PingKing Updated" -siteCode P01 -siteServer CM01
.PARAMETER appName
This is the name of the configmgr application that has the deployment type. This accepts input from pipeline.
.PARAMETER DeploymentType
This is the name of the Deployment Type that you want to copy.
.PARAMETER newDTName
This is the name of the new DeploymentType.
.PARAMETER siteCode
This is the ConfigMgr site code you are working with. Defaults to LAB.
.PARAMETER siteServer
This is the site server you are going to work with. WMI calls are made to this server. It is most likely your primary site server.
#>
[CmdletBinding()]
param (
[Parameter(
Position=0,
Mandatory=$true,
ValueFromPipeline=$true,
ValueFromPipelineByPropertyName=$true)
]
$appName,
$DeploymentType,
$newDTname,
$siteCode = "LAB",
$siteServer = "cm01.cm.lab"
)
begin {
write-verbose "Import module"
import-module 'C:\Program Files (x86)\Microsoft Configuration Manager\AdminConsole\bin\ConfigurationManager.psd1' -force #make this work for you
write-verbose "Connect to Provider and change location"
if ((get-psdrive $sitecode -erroraction SilentlyContinue | measure).Count -ne 1) {
new-psdrive -Name $SiteCode -PSProvider "AdminUI.PS.Provider\CMSite" -Root $SiteServer
write-verbose "Connect to the default scope"
try {
$connectionManager = New-Object Microsoft.ConfigurationManagement.ManagementProvider.WqlQueryEngine.WqlConnectionManager
$connectionManager.Connect($siteServer) | Out-Null
[Microsoft.ConfigurationManagement.ApplicationManagement.NamedObject]::DefaultScope = [Microsoft.ConfigurationManagement.AdminConsole.AppManFoundation.ApplicationFactory]::GetAuthoringScope($connectionManager)
}
catch {
throw $_
}
}
write-verbose "Set location $sitecode"
set-location $sitecode`:

}

process {
write-verbose "Get Application $appName"
try {
$Appdt = Get-CMApplication -Name $appName -ErrorAction Stop
}
catch {
throw "Unable to get $appName - $_"
}

$xml = [Microsoft.ConfigurationManagement.ApplicationManagement.Serialization.SccmSerializer]::DeserializeFromString($appdt.SDMPackageXML,$True)

$numDTS = $xml.DeploymentTypes.count
write-verbose "Number of DT: $numDTS"
$dts = $xml.DeploymentTypes

foreach ($dt in $dts)
{
if ($dt.title -eq $DeploymentType ) {
write-verbose "Found DT $deploymenttype"
$newDeploymentType = $dt.Copy()
write-verbose "Set new DT name $newDTname"
$newDeploymentType.Title = $newDTname
$newDeploymentType.ChangeID()

}
}
if ($newDeploymentType -and $newDeploymentType.GetType().Name -eq 'DeploymentType') {

write-verbose "New DT created"
$xml.DeploymentTypes.Add($newDeploymentType)

write-verbose "Commit to AppObject"
$UpdatedXML = [Microsoft.ConfigurationManagement.ApplicationManagement.Serialization.SccmSerializer]::SerializeToString($XML, $True)
$appdt.SDMPackageXML = $UpdatedXML
Set-CMApplication -InputObject $appDT
}
else {
write-error "No DeploymentType $newDTname located"
}
}

end
{
write-verbose "Return to c:"
set-location c:
}
}

Boundary Groups and Secondary Sites

After spending a few hours reading about how to configure Boundaries and Boundary Groups in regards to Secondary Sites in ConfigMgr 2012, I had yet to find something that made it explicitly clear. How does a client know that it should be communicating with the Secondary Site?

So far I gathered that Site Assignment boundaries cannot conflict with other boundaries, but Distribution Points can be assigned all over the place.

And you can associate one or more distribution points with each boundary group. You can also add a single distribution point to multiple boundary groups. The default behavior is to choose the closest server from which to transfer the content. And remember that ConfigMgr 2012 supports a client being a member of multiple boundary groups for content location, but not for automatic site assignment.

From <https://msandbu.wordpress.com/2012/10/05/boundaries-and-boundary-groups/>

To further understand the site assignment I read the most quoted blog all over the internet. Some things became clearer – such as the fact that the Primary Site should always be used as the Site Assignment for a boundary.

Note that none of this implies that MPs are located using Content Location Boundary Groups, just the fact that a client is within the scope of a secondary. MP retrieval in ConfigMgr 2012 is not based on client location, just site assignment. The above also does not imply that clients will fallback to a primary site if the MP in the secondary site is down; when an MP at a secondary goes down, clients within the scope of that secondary are essentially on an island unless you change the Boundary Groups and wait for their 25 hour re-evaluation cycle or the clients detect a network change.

From <http://myitforum.com/myitforumwp/2012/08/02/secondary-sites-and-boundary-groups/>

Yet another thread provided some insight into the fact that MPs are actually evaluated when they are provided as preferred management points.

  • They enable clients to find a primary site for client assignment (automatic site assignment).
  • They can provide clients with a list of available site systems that have content after you associate the distribution point and state migration point site system servers with the boundary group.
  • Beginning with System Center 2012 Configuration Manager SP2, they support management points and can provide clients with a list of preferred management points.

From <https://technet.microsoft.com/en-us/ec3bae17-9b97-42d0-9c23-f634a3665606>

This last quote made it click though… If a boundary group is used for both site assignment and for content location the Management Point (of the Secondary Site) should also be specified in the list of Distribution Points.

Here is the conclusion:

Irrespective of the option “Clients prefer to use management points specified in boundary groups” is selected or not selected, If the hierarchy contains a Secondary Site with multiple Boundary Groups associated with it for site assignment, each Boundary Group “MUST” have the Management Point of that Secondary Site is added.

From <https://blogs.technet.microsoft.com/senthilkumar/2015/08/10/configmgr2012-sp2-r2sp1-preferred-management-points-configuration-and-secondary-sites/>

Well, how does this actually look?

image

Now, this has to be a piece of historic GUI that has simply been left behind. It's ugly, and no one truly gets this. In the above case – a client that is a member of the above Boundary Group will be communicating with the Secondary Site. I wonder what happens if there are conflicts with assigned MPs…?

The check-mark Use this boundary for site assignment is recommended to be separated into its own boundary group (it gives clarity, I suppose). Secondary Sites should never be used for site assignment. I can only assume (based on the last quote I posted above) that if Site Assignment and Site System Servers are separated, adding both the Secondary Site MP and a local DP to the Site System Servers part is not necessary. I haven't confirmed this though…

In case you want to see how many clients are assigned to a specific Management Point, a splendid fella just posted a simple SQL-query to identify this.

ConfigMgr and a backlog in distributions

Scenario

Do you have a primary site and a few secondary sites in ConfigMgr 2012+?

Do you schedule the legacy Package format to update on a schedule?

image

Do you have a backlog in the distribution manager?

Well, so far this is a known (by Microsoft) defect that apparently is yet to be fixed (until 1606 – nothing confirmed beyond that).

Symptoms

If you review the database where ConfigMgr resides you can see that there is a constantly growing number of DistributionJobs. Sample query to get an overview;

use <database>
select COUNT(*) from distributionjobs

The problem grows the more packages you have set to update on a schedule. The frequency of the schedule is not relevant; the package will end up in a never-ending update loop. Most likely the primary site will handle this efficiently, however the sending to secondary sites will cause a backlog that is not just an annoyance but causes severe problems, as the backlog will continue to grow.

Repeating this: The frequency of the schedule is not relevant. Just check the above checkbox and the issue will occur.

SQL query to locate relevant packages

use <database>
select pkg.PkgID, pkg.Manufacturer, pkg.Name, pkg.Version, pkg.Language, pkg.RefreshSchedule from SMSPackages as pkg
where datalength(pkg.RefreshSchedule) !=0

Fixit

Easy – uncheck all the check-boxes that update packages on a schedule. If you still want to update packages on a schedule, use a PowerShell script to trigger the update and use the Task Scheduler to run it.

Run the command-line;

powershell -executionpolicy bypass -file SCCM.UpdatePkg.ps1 -packageid <PACKAGEID>

Code:
(I honestly don’t know if I have stolen / copied this from somewhere – if I have give me a ping and I will remove this)

#========================================================================
# Created on: 2014-10-28 15:06
# Created by: Nicke Källén
# Organization: Applepie.se
# Filename: SCCM.UpdatePkg.ps1
#========================================================================
Param(
[Parameter(Mandatory=$True,Position=1)]
[string]$packageid
)

Function Invoke-CMPackageUpdate
{
[CmdLetBinding()]
Param(
[Parameter(Mandatory=$True,HelpMessage="Please Enter Primary Server Site code")]
$SiteCode,
[Parameter(Mandatory=$True,HelpMessage="Please Enter Primary Server Name")]
$SiteServer,
[Parameter(Mandatory=$True,HelpMessage="Please Enter Package/Application ID")]
$PackageID
)

Try{
$PackageClass = [wmiclass] "\\$($siteserver)\root\sms\site_$($sitecode):SMS_Package"
$newPackage = $PackageClass.CreateInstance()

$newPackage.PackageID = $PackageID

$newPackage.RefreshPkgSource()
}
Catch{
$_.Exception.Message
}

}

Invoke-CMPackageUpdate -SiteCode <SITECODE> -SiteServer <SERVER> -PackageID $packageid
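
To run this on a schedule, a scheduled task can invoke the command-line above. A minimal sketch using the ScheduledTasks cmdlets (available on Server 2012 / Windows 8 and later) – the script path, time and task name are examples:

# Register a daily task that refreshes a single package source
$action = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-ExecutionPolicy Bypass -File C:\Scripts\SCCM.UpdatePkg.ps1 -packageid <PACKAGEID>'
$trigger = New-ScheduledTaskTrigger -Daily -At 3am
Register-ScheduledTask -TaskName 'Refresh ConfigMgr package source' -Action $action -Trigger $trigger -RunLevel Highest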

Quick test of WDS

Just a quick test of a TFTP server – to validate that it is responsive…

These commands can be run from any client (screenshots are from Win7)

Step 1

Install the TFTP Client

image
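
If you prefer the command-line over the Turn Windows features on or off dialog, something like the following should enable the client (the feature name TFTP is an assumption – verify it with dism /online /get-features):

dism /online /Enable-Feature /FeatureName:TFTP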

Step 2

Run the command in a folder where you have permission to write

tftp -i <servername> GET \boot\x86\wdsnbp.com

If the TFTP-client is not installed the below error message will be received

image

If it is successful, you will have downloaded a small file

image

Parallels Software Update Point–selfsigned certificate

As a continuation of the previous post on how to set up the Parallels Software Update Point (introduced in Parallels Mac Management for SCCM 4.5) – here comes a minor hack on how to enable self-signed certificates for WSUS and leverage this within the Parallels SUP.

Step 1.

Enable self-signed certificates for WSUS

Set the following registry key

HKLM\Software\Microsoft\Update Services\Server\Setup
DWORD: EnableSelfSignedCertificates – 1
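
For reference, the same value can be set from an elevated PowerShell prompt on the WSUS server – a minimal sketch:

# Create/overwrite the EnableSelfSignedCertificates DWORD under the WSUS setup key
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Update Services\Server\Setup' -Name EnableSelfSignedCertificates -Value 1 -PropertyType DWord -Force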

Step 2

Open certmgr.msc on the server where WSUS is installed and export the WSUS self-signed certificate.

Export the WSUS Publishers Self-signed certificate from Trusted Publishers to a file. Remember to choose to export the private key…

image

…and all the extended properties…

image

… and set a password…

image
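
If you would rather script the export, a sketch along these lines should work on Server 2012 or later (the subject filter and file path are assumptions – verify the certificate name in your own Trusted Publishers store):

# Locate the WSUS self-signed certificate in Trusted Publishers and export it with its private key
# (assumes a single matching certificate)
$cert = Get-ChildItem Cert:\LocalMachine\TrustedPublisher | Where-Object { $_.Subject -like '*WSUS Publishers Self-signed*' }
$pfxPassword = Read-Host -AsSecureString -Prompt 'PFX password'
Export-PfxCertificate -Cert $cert -FilePath C:\Temp\WSUS-SelfSigned.pfx -Password $pfxPassword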

Step 3

Run some code provided by Parallels to set the certificate you just exported as the signing certificate. Replace CERTFILE with the path to the exported file and CERTPW with its password.

[Reflection.Assembly]::LoadWithPartialName("Microsoft.UpdateServices.Administration")

$updateServer = [Microsoft.UpdateServices.Administration.AdminProxy]::GetUpdateServer()

$config = $updateServer.GetConfiguration()

$config.SetSigningCertificate("CERTFILE", "CERTPW")
$config.Save()

Step 4.

Complete the setup wizard, as you have already followed all the previous steps.

image

Software Center can not be loaded

Regardless of what version of the ConfigMgr agent (2012 –> 1602) you run – there still seems to be a possibility of left-overs from ConfigMgr 2007.

Within the SCClient log-file the following error is generated;

Exception Microsoft.SoftwareCenter.Client.Data.WmiException: Provider load failure       (Microsoft.SoftwareCenter.Client.SingleInstanceApplication at OnGetException)

The following is presented to the user when starting Software Center

image

Software Center can not be loaded. There is a problem loading the required components for Software Center.

It seems that this is due to a reference that is no longer in use – dcmsdk.dll, located under SysWOW64 (the 32-bit path on 64-bit systems). Sample output using reg query:

HKEY_LOCAL_MACHINE\Software\Wow6432node\classes\CLSID\{555B0C3E-41BB-4B8A-A8AE-8A9BEE761BDF}
(Default)    REG_SZ    Configmgr Desired Configuration WMI Provider

HKEY_LOCAL_MACHINE\Software\Wow6432node\classes\CLSID\{555B0C3E-41BB-4B8A-A8AE-8A9BEE761BDF}\InProcServer32
(Default)    REG_SZ    C:\WINDOWS\SysWOW64\CCM\dcmsdk.dll

End of search: 2 match(es) found.

Fix? Delete the registry key – sample command line;

reg delete HKLM\Software\Wow6432node\classes\CLSID\{555B0C3E-41BB-4B8A-A8AE-8A9BEE761BDF} /f
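
If you would rather remediate this with PowerShell (for example wrapped in a Compliance Item), a minimal sketch based on the same CLSID:

# Remove the stale ConfigMgr 2007 DCM provider registration if it is still present
$clsid = 'HKLM:\SOFTWARE\Wow6432Node\Classes\CLSID\{555B0C3E-41BB-4B8A-A8AE-8A9BEE761BDF}'
if (Test-Path $clsid) {
    Remove-Item -Path $clsid -Recurse -Force
}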

App-V 5 and publishing error code: 040000002C.

A minor defect that causes a publishing failure for any package (only tested for publishing towards a user though). The error code looks like this;

Publish-AppvClientPackage : Application Virtualization Service failed to
complete requested operation.
Operation attempted: Publish AppV Package.
AppV Error Code: 040000002C.
Error module: Virtualization Manager. Internal error detail: 4FC086040000002C.

There already seem to be a few discussions online that assist in resolving this with a few different methods – one suggests deleting a registry key, and another contains a more granular approach of resetting the registry values under LocalVFSSecuredUsers.

A correct entry is one where the SID under this registry key references %USERPROFILE%.

image

and an incorrect entry (and the cause of the error) references the Default user profile

image

A quick script (which you can wrap in a Compliance Item – or whatever your preference is…) to remediate this. The actual fix (Set-ItemProperty) is commented out with # – please test it before you deploy it.

$users = @()
$return = 0
$key = Get-Item HKLM:\SOFTWARE\Microsoft\AppV\client\Virtualization\LocalVFSSecuredUsers
$users = $key.GetValueNames() | ForEach-Object {
    New-Object PSObject -Property @{
        Name  = $_
        Type  = $key.GetValueKind($_)
        Value = $key.GetValue($_)
    } | Select-Object Name, Type, Value
}

foreach ($u in $users) {
    if ($u.Value -eq 'c:\users\Default\AppData\Local\Microsoft\AppV\Client\VFS') {
        $return = 1
        #Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\AppV\client\Virtualization\LocalVFSSecuredUsers' -Name $($u.Name) -value '%USERPROFILE%\AppData\Local\Microsoft\AppV\Client\VFS'
    }
}
$return

 

 

Per the above forum post this should be resolved within App-V 5.0 SP3, however I have still seen minor occurrences in later releases – so I wouldn't call that a confirmed fix.

Office 365 and its import service

Office 365 is a cloud service with major adoption. One part of this is removing the on-premises Exchange servers and instead leveraging the Exchange Online service. The typical increase in allowed mailbox size is a big selling point, but additional benefits are added every day.

Migration of PST

The increased mailbox size does start the discussion of how to eliminate all the PST-files spread among local client hard drives and file-servers in an organization. Microsoft has offered the PST Capture tool (scan all devices, locate all PSTs and import them), and as of last year (2015) the Import PST Files to Office 365 service is a way to allow sysadmins to perform a more controlled (batch) upload of files. As always, end-users can migrate the data via Outlook.

None of these ways are “great”. The PST Capture Tool has a great process for collecting files, but it is essentially a gather tool without any intelligence about what the user wants – it will dump anything it can find into a mailbox.

The newly arrived Import PST file-service is a batch-management tool that seems to target admins that have a bunch of files to upload at a single time (or potentially a few times). Options are either to upload these into an Office 365 managed Azure Storage Space or to simply ship a hard-drive with a collection of files.
A few people have explained that the Office 365 Import service has a PowerShell interface; unfortunately it is not documented and Microsoft support does not acknowledge that it exists.

In addition to the above options provided by Microsoft there are a few third-party options – such as MessageOps.

End-user driven migration

To make something that is end-user friendly, a bit of automation of the above tools is required. Currently a dropbox (a folder where users can dump their PST-files) has been designed, however in order to get that working there are quite a few hurdles to overcome in order to produce any type of favourable result. The below notes are for my own memory…

Office 365 Import Service

As the Office 365 Import Service is a Microsoft supported tool, it was considered the most reliable option of all of the above. Requiring Outlook for end-users to migrate the data seemed to be a high-cost and not very friendly solution. The PST Capture tool was likely to migrate data which wasn't relevant to the user, and the risk was of course that something would be missed in the process. Third-party options incurred additional licensing costs (on top of any Office 365 licensing) and were therefore discarded early in the process.

Account requirements

To initiate anything with the Office 365 PST Import service you are required to have certain permissions, as well as to only use simple authentication if there is an intent to automate the process. If any type of multi-factor authentication is enabled, the ability to connect to Office 365 via a PowerShell session is disabled.

image

Permissions

A service account has to be set up that has the role Mailbox Import Export assigned to it. This role isn't included in any of the default permission sets, so it is recommended to create a new role group, assign the role to it and make the service-account a member of the group.

It also seems that to be able to access the Office 365 PST Import Service from the portal one has to be a Global Admin as well. The PowerShell cmdlets are only available once the role Mailbox Import Export has been assigned.
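
A sketch of the role assignment, run in a connected Exchange Online session (the group and account names are examples; with the session prefix used later in this post the cmdlet becomes New-O365RoleGroup):

# Create a role group that carries the Mailbox Import Export role and add the service account to it
New-RoleGroup -Name 'PST Import' -Roles 'Mailbox Import Export' -Members 'svc-pstimport@company.onmicrosoft.com'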

Storage (Azure)

The Office 365 Import Service offers to set up an Azure Storage Space for the tenant, and will provide the Shared Access Signature required to upload files (if using the network upload) or a storage key if using the option of shipping a hard-drive. To leverage any kind of automation, the only accessible path for the Office 365 PST Import service is a blob located on an Azure Storage Space. It seems that sometime in March (2016) the previous method of using the storage key to generate a Shared Access Signature (SAS) that allows read-operations for the Import Service (technically these are performed by the Mailbox Replication Service provided by Office 365) was discontinued. One can find a storage key for the option to send in a hard-drive, however that does not seem to use the same upload space as the network upload and therefore that storage key can't be used.

Fortunately enough, the only requirement is an Azure Storage Space, which can be provided via a normal Azure subscription. Set up a Blob in an Azure Storage Space and you immediately have access to it. Once the Storage Space is set up you can retrieve the storage key by locating the key-icon.

image

There will be two keys which you can leverage.

image
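
The keys can also be retrieved with the classic Azure PowerShell module instead of the portal – a minimal sketch (the account name is an example):

# Requires the classic (service management) Azure module and an authenticated subscription
Get-AzureStorageKey -StorageAccountName 'yourstorage' | Format-List Primary, Secondary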

To generate a Shared Access Signature (needed for the automation part) you can download the Azure Storage Explorer 6. The tool allows a quick and easy way to view what's in a Blob on the Storage Space, as well as generating the SAS-key.

Once you start the tool choose to add your storage space with the storage key above. Remember to check HTTPS.

image

Once you have connected to your Storage Space, choose to create a new Blob (with no anonymous access). Once this is created you can press Security to start generating the SAS-key.

image

Generate the signature by selecting a start-date (keep track of what timezone you are in) and an end-date. These dates define the validity period of your SAS-key. Remember to define the actions you want to allow. To upload files you need to allow write, and to use the SAS-key for importing the files you need to allow read. There is also the possibility to generate multiple SAS-keys and use them for different parts of the process.

image

A SAS-key is built of multiple parts – here comes a brief explanation;

#sv = storage services version; 2014-02-14
#sr  = storage resource; b (blob), c (container)
#sig = signature
#st = start time; 2016-02-01T13%3A30%3A00Z
#se = expiration time; 2016-02-09T13%3A30%3A00Z
#sp = permissions; rw (read,write)

 

Sample:

?sv=2012-02-12&se=9999-12-31T23%3A59%3A59Z&sr=c&si=IngestionSasForAzCopy201601121920498117&sig=Vt5S4hVzlzMcBkuH8bH711atBffdrOS72TlV1mNdORg%3D
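
If you want to generate the SAS-key without the Storage Explorer GUI, the classic Azure.Storage cmdlets (the same ones used for the cleanup further down) can do it – a sketch, where the account, key and container names are examples:

$context = New-AzureStorageContext -StorageAccountName 'yourstorage' -StorageAccountKey '<storage key>'
# Read/write token valid for seven days, for the container holding the PST-files
New-AzureStorageContainerSASToken -Name 'folder' -Permission rw -StartTime (Get-Date) -ExpiryTime (Get-Date).AddDays(7) -Context $context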

 

Copy files to Azure Storage Space

There are multiple ways to copy files to an Azure Storage Space. You can use the Azure Storage Explorer 6 that was used to generate the SAS-key. Someone has provided a GUI for the AzCopy command-line tool, but for automation the command-line usage of AzCopy is the route to go. Microsoft has written an excellent guide for this which doesn't need any additional information.
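
For reference, an upload with the classic AzCopy syntax looks roughly like this (source folder, container URL and SAS-key are placeholders):

AzCopy /Source:C:\PSTUpload /Dest:https://yourstorage.blob.core.windows.net/folder /DestSAS:"<SAS-key>" /Pattern:*.pst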

Connecting to Office 365

Any type of automated management of Office 365 is performed via a PSSession (PowerShell Session). A PSSession will import all available cmdlets from Office 365. As you can imagine quite a few are similar to Exchange, and it may therefore cause some overlap. To avoid confusion the recommended approach is to add a prefix to all cmdlets from the session, which can be defined when the session is imported. This is a sample script that provides the username and password required to connect, configures the proxy-options for the PowerShell session and sets up the session with O365.

$password = ConvertTo-SecureString "password" -AsPlainText -Force
$userid = "name-admin@company.onmicrosoft.com" 
$cred = New-Object System.Management.Automation.PSCredential $userid,$password 
$proxyOptions = New-PSSessionOption -ProxyAccessType IEConfig -ProxyAuthentication Negotiate -OperationTimeout 360000
$global:session365 = New-PSSession -configurationname Microsoft.Exchange -connectionuri https://ps.outlook.com/powershell/ -credential $cred -authentication Basic -AllowRedirection -SessionOption $proxyOptions
Import-PSSession $global:session365 -Prefix  O365

 

Once the session is started the cmdlets are imported with the prefix O365; as an example, commands go from:

Get-Mailbox

to

Get-O365Mailbox

 

Using the Import-service via Powershell

As noted, the Office 365 Import service is a GUI-only approach that is not supported for automation. That being said, there are options to start this via PowerShell. Multiple blog-posts document the New-MailboxImportRequest cmdlet (with the prefix it is now New-O365MailboxImportRequest), however Microsoft support will barely acknowledge its existence.

As long as you have the previously stated account permissions assigned (the Mailbox Import Export role) the cmdlet will be available and can be used.

For Office 365 the only supported source is an Azure Storage Space. The import-service creates one for you, however today (2016-05-12) we are unable to create the Shared Access Signature that would allow the automation part to use that Storage Space. In January 2016 this doesn't seem to have been the case, so we can assume that this might change again in the future.

The below command-line will allow you to start an import. If you receive a 404 error there is most likely a bad path to the file, and a 403 result is most likely a bad SAS-key.

Remember: O365 is the prefix we chose when running Import-PSSession. The actual cmdlet is New-MailboxImportRequest.

New-o365MailboxImportRequest -Mailbox user@mailbox.com -AzureBlobStorageAccountUri https://yourstorage.blob.core.windows.net/folder/User/test.pst -BadItemLimit unlimited -AcceptLargeDataLoss -AzureSharedAccessSignatureToken "?sv=2012-02-12&se=9999-12-31T23%3A59%3A59Z&sr=c&si=IngestionSasForAzCopy201601121920498117&sig=Vt5S4hVzlzMcBkuH8bH711atBffdrOS72TlV1mNdORg%3D" -TargetRootFolder Nameoffolderinmailbox

 

Retrieving statistics

Once the import is started it fires off and actually goes through pretty quickly. As you can imagine, the results can be retrieved by using Get-O365MailboxImportRequest and Get-O365MailboxImportRequestStatistics. One oddity was that piping Get-O365MailboxImportRequest into Get-O365MailboxImportRequestStatistics didn't work as expected. Apparently the required identifier is named Identity but it actually expects the RequestGuid.

Sample loop;

$mbxreqs = Get-O365MailboxImportRequest
foreach ($mbx in $mbxreqs) {
$mbxstat = Get-O365MailboxImportRequestStatistics -Identity $mbx.RequestGuid
$mbxstat | Select-Object TargetAlias,Name,targetrootfolder, estimatedtransfersize,status, azureblobstorageaccounturi,StartTimeStamp,CompletionTimeStamp,FailureTimeStamp, identity
}

The above properties are the ones that were useful for a brief overview. Sometimes you can manage with just Get-O365MailboxImportRequest.

Cleanup of Azure Storage Space

What does not happen automatically (well, nothing in this process happens automatically) is the removal of the PST-files uploaded to the Azure Storage Space. Having the users' PST-files located in a Storage Space will consume resources (and money), and the user might also be a bit uncomfortable about it. As always the attempt is to automate this process. To retrieve the cmdlets for managing the Azure Storage Space (remember, there are multiple ways to handle this; AzCopy is a single-purpose tool) you need to download Azure PowerShell. Microsoft again has an excellent guide on how to get started. What would be even better is if all these services could provide a common approach to management. For Office 365 you import a session, but for Azure you download and install cmdlets?

Once the Azure PowerShell cmdlets are installed you can easily create a cleanup job that deletes any file older than 15 days. First a time-limit is defined. Secondly we set up a connection to the Azure Storage Space (New-AzureStorageContext), and then we retrieve all files in our specific blob, filter based on our time-limit and start removing them.

Good to know: Remove-AzureStorageBlob does accept -WhatIf. However, -WhatIf will still execute the remove. Test your code carefully… Most likely this is true for many other cmdlets.

[datetime]$limit = (Get-Date).AddDays(-15)
$context = New-AzureStorageContext -StorageAccountName $straccountname -StorageAccountKey $straccountkey -ErrorAction Stop

Get-AzureStorageBlob -Container $strblob -blob *.pst -Context $context | Where-Object { $_.LastModified -lt $limit } | ForEach-Object {Remove-AzureStorageBlob -Blob $_.Name -Container $strblob -Context $context}

 

Summary

A long rant that probably hasn't given you much. To be honest – these are memory notes for myself. The parts involved in creating an automated workflow require a lot of moving bits and pieces that utilize what the common man would define as the cloud. The cloud is several messy parts that aren't polished, not well documented, always in preview (technical preview, beta, early release, not launched…) and constantly changing.

All of the above are things that provided a bit of struggle. Most likely the struggle is due to lack of insight into a few of the technologies, and as more insight was gained the right questions were asked. If you read all of the links above carefully you will most likely see a few comments from me.

Parallels Mac Management for SCCM–Software Update Point

As of Parallels Mac Management 4.5 there are great new features – such as the new role Software Update Point.

This role enables managed updates for the OSX-devices within your environment and acts as a bridge between the Apple Software Update Server (or the public Apple service) and the Configuration Manager environment which PMM integrates into. All of these products now integrate in a (sort of) seamless way and PMM can use its new role (the PMM SUP) to inject updates into Microsoft WSUS, which ConfigMgr then uses to publish content. The Apple SUS is optional, and if one is set up you can leverage it to further control updates.

Most of this knowledge is based on the Parallels Mac Management for SCCM (bad acronym right there..) Admin guide

Prerequisites

To start using the PMM for SCCM Software Update Point it is required to have a Microsoft WSUS server installed and leveraged for the ConfigMgr environment. Most likely this is already in place if you are already managing updates for Windows-devices.

Allow locally published content

It is required to configure clients to trust locally published content from WSUS. Complete instructions are available from Microsoft, however a quick way to verify if this is set up is to check the following registry key.

HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate
AcceptTrustedPublisherCerts – 1

image
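
A quick PowerShell check of the same value:

# Returns 1 if locally published (self-signed) content is trusted
(Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate' -ErrorAction SilentlyContinue).AcceptTrustedPublisherCerts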

In addition the signing certificate setup for WSUS needs to be trusted by the client.

WSUS server

The PMM for SCCM SUP should be installed on the server that has WSUS installed (the top server in a hierarchy). Before completing that installation a few things need to be verified. A service account has to be used and configured for the SUP-role, and this account has to be a member of the local Administrators group on the server. In addition the service account has to be a member of the local group WSUS Administrators.

image
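
Adding the service account to both groups can be done from an elevated prompt – the account name is an example:

net localgroup Administrators DOMAIN\svc-pmmsup /add
net localgroup "WSUS Administrators" DOMAIN\svc-pmmsup /add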

Choose Update Server

There are three options when choosing what type of source the Apple updates should be retrieved from. The basics are:

  • Apple Software Updates (public)
    Users can choose what updates to install and are able to postpone installation and restarts.
    Updates are downloaded from Apple.
  • Local Update Server (intranet source)
    Users can choose what updates to install and are able to postpone restarts.
  • Local Update Server (intranet source) – filtered
    Administrators deploy updates.

My personal preference is the public Apple Software Updates, but in case you want to avoid WAN traffic and potentially want more control of the updates for your devices, the option is a local Apple Software Update Server (or Local Update Server as stated above). The Apple Software Update Server is part of OSX Server (which can be purchased from the App Store). Like all other things – this role can be enabled and set up pretty easily. However, it does require an OSX-instance that is running as a server in your environment.

Apple Software Update Server

Once the Apple Software Update Server is set up, the PMM for SCCM SUP needs to be configured to direct all requests to this server. A simple registry edit will finalize the configuration.

These items only need to be changed if you want to use a local Apple SUS.

Node: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\pmm_sup_service\Parameters

Server Address: SusCatalogBaseUrl
Port: HttpServerPort
Server settings update interval: InfoUpdateIntervalSeconds
Catalog check: CatalogRefreshIntervalSeconds
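
A sketch of pointing the PMM SUP at a local Apple SUS via PowerShell – the URL is an example and the exact value format/type is an assumption, so verify against the admin guide:

$params = 'HKLM:\SYSTEM\CurrentControlSet\Services\pmm_sup_service\Parameters'
# Example catalog URL for a local Apple Software Update Server
Set-ItemProperty -Path $params -Name SusCatalogBaseUrl -Value 'http://osxserver.domain.local:8088'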

Any activity is logged in the following log-file;

%Windir%\Logs\pmm\pmm_sup_service.log

ConfigMgr configuration

ConfigMgr needs to be configured to synchronize the new Apple updates. First, update the Classifications on the synchronization properties of the ConfigMgr Software Update Point.

image

In addition the Apple product needs to be selected.

image

Client settings

If one is using the public Apple Software Updates there isn’t a need to configure the PMM for SCCM agent as the agent is set to use this source by default. There are three options that can be configured in the following options file:

/Library/Preferences/com.parallels.pma.agent.plist

These match the previously suggested routes;

0 — Apple Software Update server (default).
1 — Local update server.
2 — Local update server with selected updates.

Set the option :SuCatalogMode to the desired choice in case you need to update it. Parallels has realized that the provided Configuration Items are sub-par, so the admin-guide contains script examples (pages 134-135) that you can use to create your own Configuration Items.

Summary

Within an environment that already has ConfigMgr, WSUS and PMM set up, adding the PMM for SCCM SUP isn't a lot of extra work and enables management of OSX updates.

VMware Horizon View Client–silent install

A few short notes on how to silently deploy the VMware Horizon View Client 4.0.1.

The latest release can be downloaded from VMware

 

File: ‘VMware-Horizon-Client-x86_64-4.0.1-3698521.exe’

The intent is to have no desktop shortcut, enable all features, set a default server and have the user log in automatically.

Parameters:

'/s /v"/qn 
REBOOT=ReallySuppress 
ADDLOCAL=ALL 
DESKTOP_SHORTCUT=0 
STARTMENU_SHORTCUT=1 
VDM_SERVER=vdi.site.com 
LOGINASCURRENTUSER_DEFAULT=1 
LOGINASCURRENTUSER_DISPLAY=1"'

Registry update:

'HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432node\VMware, Inc.\VMware VDM\Client\Security'
Name: 'LogInAsCurrentUser' Value: 1 Type: DWORD
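
Put together, a deployment sketch in PowerShell (the file name, parameters and server are the ones from this post; adjust paths to suit your packaging tool):

$exe = 'VMware-Horizon-Client-x86_64-4.0.1-3698521.exe'
$arguments = '/s /v"/qn REBOOT=ReallySuppress ADDLOCAL=ALL DESKTOP_SHORTCUT=0 STARTMENU_SHORTCUT=1 VDM_SERVER=vdi.site.com LOGINASCURRENTUSER_DEFAULT=1 LOGINASCURRENTUSER_DISPLAY=1"'
Start-Process -FilePath $exe -ArgumentList $arguments -Wait
# Allow log in as current user by default (same value as the registry update above)
New-ItemProperty -Path 'HKLM:\SOFTWARE\Wow6432Node\VMware, Inc.\VMware VDM\Client\Security' -Name LogInAsCurrentUser -Value 1 -PropertyType DWord -Force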