Deploying Windows 10 with System Center Configuration Manager (SCCM)

There are a number of different ways Configuration Manager can be used to deploy Windows 10.

  • In-place Upgrade: Windows 7, 8, 8.1, or 10 to the latest version. The upgrade process retains the applications, settings, and user data on the computer.
  • Refresh an existing computer (Wipe and Transfer Settings): Wipes an existing computer and installs a new operating system on it. You can migrate settings and user data back after the operating system is installed.
  • Bare Metal Install on a New Computer: Install Windows on a new computer.
  • Replace an existing computer and transfer settings: Install Windows on a new computer. Optionally, you can migrate settings and user data from the old computer to the new computer.


In-place Upgrade

This method is used to upgrade Windows 7, 8, or 8.1 to Windows 10. You can also do build-to-build upgrades of Windows 10, such as 1607 to 1709. Additionally, starting in Configuration Manager 1802, the Windows 10 in-place upgrade task sequence supports deployment to internet-based clients managed through the cloud management gateway (CMG). This method is the most robust and has no external dependencies, such as the Windows ADK.

With the in-place upgrade you:

  • CANNOT change domain membership
  • CANNOT change disk partitions
  • CANNOT change architecture (x86 to x64)

Preparing the Upgrade Package

  1. Add operating system upgrade packages to Configuration Manager (the extracted ISO path)

  2. Distribute operating system images to a distribution point
  3. Apply software updates to an operating system upgrade package


Create an In-place Upgrade Task Sequence

  1. Create a New Task Sequence -> Upgrade an operating system from upgrade package

  2. Select the Upgrade Package from the previous step
  3. Set Product key, Include Updates and Install Applications preferences
  4. Complete the wizard
  5. Add steps to get the previous OS into an upgradeable state, such as:

    1. Battery checks
    2. Network/wired connection checks
    3. Remove incompatible applications
    4. Remove incompatible drivers
    5. Remove/suspend third-party antivirus.

Pre-Cache Content on Client

You can optionally pre-deploy the upgrade package to clients so that they do not have to download it when they choose to install it in Software Center. This is called pre-caching the content.

Download Package Content Step

You can also use the Download Package Content step to handle things such as detecting the client architecture or hardware type and downloading the matching driver package.

Deploy the Task Sequence to Computers

There are a few ways in which you can deploy this task sequence to computers:

  • Use Software Center to deploy over the network
  • Use USB drive to deploy without using the network
  • Cloud Management Gateway (1802 and above)

Deploy with Software Center

  1. Task Sequence -> Deploy
  2. Set Deployment Settings, Scheduling, User Experience, Distribution Points
  3. With 1802 you can now also choose to save this deployment as a template.

Deploy with USB Drive

  1. Select your task sequence and choose Create Task Sequence Media

Deploy with Cloud Management Gateway (CMG)

Starting with Configuration Manager 1802, you can use a CMG to deploy a task sequence.

  1. Ensure all of the content referenced by the in-place upgrade task sequence is distributed to a cloud distribution point.
  2. When deploying the task sequence, check the additional options:

    1. Allow task sequence to run for client on the Internet, on the User Experience tab of the deployment.
    2. Download all content locally before starting task sequence, on the Distribution Points tab of the deployment.
    3. Other options such as Download content locally when needed by the running task sequence do not work in this scenario.
    4. Pre-download content for this task sequence, on the General tab of the deployment

Refresh an Existing Computer (Wipe and Transfer Settings)

Let’s look at how we can partition and format (wipe) an existing computer, install a new operating system onto it, and transfer the settings as well. We need to install a state migration point to store the user settings and restore them on the new operating system after it is installed. This is called a Refresh.

Prepare for a Refresh Scenario

There are several infrastructure requirements that must be in place before we can deploy operating systems with Configuration Manager.

Dependencies External to Configuration Manager

  • Windows ADK for Windows 10
    • User State Migration Tool (USMT) (transfer user settings)
    • WinPE images (PXE boot media)
  • Windows Server Update Services (WSUS) (for updates during deployment)
  • Windows Deployment Services (WDS) (PXE boot environment)

    • Also DHCP enabled
  • Internet Information Services (IIS) on the site system server
  • Device drivers ready

Configuration Manager Dependencies

  • OS Image
  • Driver catalog (import the device driver, enable it, and make it available on a distribution point)
  • Management point
  • PXE-enabled distribution point
  • Install a State Migration Point and configure it

Prepare WinPE Boot Image

  1. Two default boot images are provided by Configuration Manager in \\<servername>\SMS_<sitecode>\osd\boot\x64 (or \i386)
  2. Add Boot Image (Operating Systems -> Boot Images)

  3. Distribute Boot Images to Distribution Points
  4. Boot Image Properties -> Data Source tab, select Deploy this boot image from the PXE-enabled distribution point

Prepare an Operating System Image

  1. Decide to use the default install.wim or capture a reference computer and make your own .wim file.
  2. Add the Operating System Image to Configuration Manager (Operating Systems -> Add Operating System Image)

  3. Distribute the Operating System to a Distribution Point
  4. Schedule Software Updates to the Operating System Image


Create a Task Sequence

  1. Create a New Task Sequence -> Install an existing image package

  2. Choose settings such as the image package (and image), partitions, and product key
  3. Choose to Join A Domain
  4. Choose to install the Configuration Manager Client
  5. Under State Migration, choose whether to:

    1. Capture user settings: Captures the user state.
    2. Capture network settings: Captures the network settings.
    3. Capture Microsoft Windows settings: Captures the computer name, registered user and organization name, and the time zone settings.

Deploy the Task Sequence

You have a few different options to choose from when deploying the task sequence.

  • Use PXE to deploy over the network
  • Use Multicast to deploy over the network
  • Use Software Center to deploy over the network
  • Create an image for an OEM in a factory or at a local depot

    1. Create Task Sequence Media
  • Use a USB drive to deploy Windows without using the network

Bare Metal Install on a New Computer

Let’s look at how we can partition and format (wipe) a new computer and install a new operating system onto it.

Prepare for a Bare Metal Scenario

There are several infrastructure requirements that must be in place before we can deploy operating systems with Configuration Manager.

Dependencies External to Configuration Manager

  • Windows ADK for Windows 10
    • User State Migration Tool (USMT) (transfer user settings)
    • WinPE images (PXE boot media)
  • Windows Server Update Services (WSUS) (for updates during deployment)
  • Windows Deployment Services (WDS) (PXE boot environment)

    • Also DHCP enabled
  • Internet Information Services (IIS) on the site system server
  • Device drivers ready

Configuration Manager Dependencies

  • OS Image
  • Driver catalog (import the device driver, enable it, and make it available on a distribution point)
  • Management point
  • PXE-enabled distribution point

Everything else here is the same as a Refresh. Follow the directions above, skipping the sections where we would use the State Migration Point.

Replace an existing computer and transfer settings

Let’s look at how we can partition and format (wipe) a destination computer, install a new operating system onto it, and transfer settings from a source computer. We need to install a state migration point to store the user settings and restore them on the new operating system after it is installed. This is called a Replace.

Prepare for a Replace Scenario

There are several infrastructure requirements that must be in place before we can deploy operating systems with Configuration Manager.

Dependencies External to Configuration Manager

  • Windows ADK for Windows 10
    • User State Migration Tool (USMT) (transfer user settings)
    • WinPE images (PXE boot media)
  • Windows Server Update Services (WSUS) (for updates during deployment)
  • Windows Deployment Services (WDS) (PXE boot environment)

    • Also DHCP enabled
  • Internet Information Services (IIS) on the site system server
  • Device drivers ready

Configuration Manager Dependencies

  • OS Image
  • Driver catalog (import the device driver, enable it, and make it available on a distribution point)
  • Management point
  • PXE-enabled distribution point
  • Install a State Migration Point and configure it

Configure State Migration Point

The replace scenario is similar to a refresh. The exception is that we need to configure the State Migration Point so that we have a place to store the migration data that is not on the computer being replaced.


We must specify:

  1. The drive on the server to store the user state migration data.
  2. The maximum number of clients that can store data on the state migration point.
  3. The minimum free space for the state migration point to store user state data.
  4. The deletion policy for the role. Either specify that the user state data is deleted immediately after it is restored on a computer, or after a specific number of days after the user data is restored on a computer.
  5. Whether the state migration point responds only to requests to restore user state data. When you enable this option, you cannot use the state migration point to store user state data.

Deploying Windows 10 with Microsoft Deployment Toolkit (MDT)

Microsoft Deployment Toolkit is a collection of tools for automating desktop and server deployments. MDT performs deployments by using the Lite Touch Installation (LTI), Zero Touch Installation (ZTI), and User-Driven Installation (UDI) methods. Only MDT is used in LTI deployments, while ZTI and UDI deployments are performed using MDT with Configuration Manager.

Once installed, you will find the tools inside C:\Program Files\Microsoft Deployment Toolkit\. A deployment share is then created (by default C:\DeploymentShare), and its folder structure mirrors what you see in the Deployment Workbench:

Deployment Workbench

The Deployment Workbench is an MMC snapin and will be the main area used to configure the deployment.

Configure MDT to Create the Reference Computer

  1. Download Windows ADK
  2. Open the Deployment Workbench (DeploymentWorkbench.msc)
  3. Import the Operating System (.ISO)

  4. Update any required Out-of-Box Drivers or Packages
  5. Create a Task Sequence

  6. Update your deployment share

  7. The Deployment Workbench creates the C:\DeploymentShare\Boot\LiteTouchPE_x64.iso and LiteTouchPE_x64.wim files
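
Most of these Deployment Workbench actions can also be scripted with MDT's PowerShell snap-in. The following is only a minimal sketch, assuming the deployment share lives at C:\DeploymentShare and the Windows 10 installation media is mounted as drive D: (adjust both paths for your environment):

# Load the MDT snap-in (installed alongside the Deployment Workbench)
Add-PSSnapin Microsoft.BDD.PSSnapIn

# Map the deployment share as an MDT provider drive
New-PSDrive -Name "DS001" -PSProvider MDTProvider -Root "C:\DeploymentShare"

# Import the operating system from the mounted installation media
Import-MDTOperatingSystem -Path "DS001:\Operating Systems" -SourcePath "D:\" -DestinationFolder "Windows 10 x64" -Verbose

# Regenerate the LiteTouchPE boot media after making changes
Update-MDTDeploymentShare -Path "DS001:" -Verbose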

Deploy Windows / Capture Image of Reference Computer

You can burn C:\DeploymentShare\Boot\LiteTouchPE_x64.iso to a DVD or you can add it to Windows Deployment Services, which is a server role that gives you the ability to deploy Windows through PXE.

Now boot your reference computer with LiteTouchPE_x64.iso

Once Windows PE boots up, go ahead and choose your Task Sequence that you created earlier

Since this deployment is built from the original operating system plus out-of-box drivers, applications, and so on, you really want to capture a .WIM of the result and bring it back into the Deployment Workbench, so that everything is compacted into one neat file you can deploy again.

The task sequence deployment will begin

Now there will be a .WIM file of this Task Sequence deployment process ready for you to import into the Deployment Workbench.

Configure MDT to Deploy Windows to the Target Computers

During this task sequence process you chose to capture an image of your reference computer, which Microsoft recommends. Once you capture the image, you will have a .WIM file that you can import back into the Deployment Workbench and start the whole process again (import WIM, create task sequence, boot PE, etc.) to finally deploy to your target computers.

  1. Add the captured image of the reference computer to the Deployment Workbench



  2. Create a Task Sequence

  3. Again, boot from LiteTouchPE_x64.iso and this time choose your new Task Sequence you created for this WIM

Deciding to Use the Default Image or a Captured Image

Default image

The default image install.wim is included with the Windows ISO. This image is a basic operating system image that contains a standard set of drivers.

  • Advantages
    • The image size is smaller than a captured image.
    • Installing apps and configurations with task sequences is dynamic
  • Disadvantages
    • Deployment takes more time, because apps and settings are applied during the task sequence

Captured image

With a customized image, you build a reference computer, install apps and configure settings. Then, you capture the image from the computer as a WIM file.

  • Advantages
    • The installation can be faster than using the default image
  • Disadvantages
    • Not dynamic
    • OS install portion takes longer

Windows 10 Deployment Solutions and Tools

Windows AutoPilot

Windows AutoPilot automates the process of setting up and configuring Windows 10 on new devices. It can also be used to reset, repurpose and recover devices. Windows AutoPilot joins devices to Azure Active Directory (Azure AD), optionally enrolls into MDM services, configures security policies, and sets a custom out-of-box-experience (OOBE) for the end user.

You can use Windows AutoPilot to configure the Out of Box Experience (OOBE), which includes automatic enrollment that enrolls devices in Intune.

Login URL: https://portal.azure.com/

First, create a new Windows AutoPilot Deployment Program profile in Intune:

Then, find the devices you want the profile enabled for and assign the profile to those devices.
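
Before a profile can be assigned, each device's hardware hash has to be registered with the AutoPilot service. A common way to collect the hash is the Get-WindowsAutoPilotInfo script from the PowerShell Gallery; a minimal sketch follows (the output path is an arbitrary example):

# Install the published collection script from the PowerShell Gallery
Install-Script -Name Get-WindowsAutoPilotInfo -Force

# Export the serial number and hardware hash to a CSV that can be uploaded to Intune
Get-WindowsAutoPilotInfo.ps1 -OutputFile C:\Temp\AutoPilotDevices.csv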

Windows Analytics

Windows Analytics is a set of solutions that run on Operations Management Suite (OMS):

  • Device Health
  • Update Compliance
  • Upgrade Readiness

Devices report telemetry data, and this data can be accessed and analyzed by one of these solutions. Generically, telemetry is an automated communications process by which measurements and other data are collected at remote or inaccessible points and transmitted to receiving equipment for monitoring.


Windows Analytics: Upgrade Readiness

Upgrade Readiness is a free tool for Azure subscribers that helps you confirm applications and drivers are ready for a Windows 10 upgrade. The tool provides application and driver inventory, information about known issues, troubleshooting guidance, and per-device readiness. Upgrade Readiness was previously called Upgrade Analytics, which in turn replaced the Application Compatibility Toolkit (ACT).

Upgrade Readiness works by forming a connection between your computers and Microsoft. Upgrade Readiness collects computer, application, and driver data for analysis.

Upgrade Readiness is offered as a solution in the Microsoft Operations Management Suite (OMS), a collection of cloud based services for managing your on-premises and cloud environments.

Login URL: https://www.microsoft.com/en-us/WindowsForBusiness/windows-analytics

To launch the Upgrade Readiness process, run the Upgrade Readiness script on each computer that you would like to assess. There are pilot and deployment versions of the script: run the pilot version on a couple of machines to verify things are working, then run the deployment version across your environment.

Once the script is run, you can identify and resolve issues in the Upgrade Readiness dashboard. By connecting Upgrade Readiness to Configuration Manager, you can directly access the data in the Monitoring node of the Configuration Manager console.

Windows Analytics: Update Compliance

Update Compliance helps keep Windows 10 devices secure and up to date by using Microsoft Operations Management Suite (OMS) Log Analytics to provide information about the status of monthly quality and feature updates.

Windows Analytics: Device Health

Device Health complements Upgrade Readiness and Update Compliance by helping to identify device crashes and their causes. Device drivers that are causing crashes are identified, along with alternative drivers that might reduce crashes. Windows Information Protection misconfigurations are also identified.

MBR2GPT (MBR -> GPT)

MBR2GPT.EXE, introduced in the Windows 10 1703 (Creator’s Update), converts a disk from Master Boot Record (MBR) to GUID Partition Table (GPT) partition style without modifying data on the disk. Previously, it was necessary to image, then wipe and reload a disk to change from MBR format to GPT.

The tool is designed to be run from a Windows PE command prompt, but can also be run from the full Windows 10 operating system (OS) by using the /allowFullOS option.

GPT enables the use of larger disk partitions, added data reliability, and faster boot and shutdown speeds

GPT also enables the use of the Unified Extensible Firmware Interface (UEFI) which replaces the BIOS. Security features of Windows 10 that require UEFI mode include: Secure Boot, Early Launch Anti-malware (ELAM) driver, Windows Trusted Boot, Measured Boot, Device Guard, Credential Guard, and BitLocker Network Unlock.

MBR2GPT.EXE is located in the \Windows\System32 directory.
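
For example, from a running OS you would typically validate the disk first and then convert it (disk 0 is an assumption; confirm your system disk number first):

# Validate that the system disk can be converted (no changes are made)
mbr2gpt /validate /disk:0 /allowFullOS

# Convert the disk from MBR to GPT without modifying data
mbr2gpt /convert /disk:0 /allowFullOS

After converting, switch the firmware from legacy BIOS to UEFI mode so the system can boot from the GPT disk.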

Windows ADK for Windows 10

The Windows Assessment and Deployment Kit (Windows ADK) is a suite of tools to assess and deploy Windows. A version is released for each version of Windows, with the current version being the Windows ADK for Windows 10, version 1709. Its predecessor (for Windows 7) was the Windows Automated Installation Kit (Windows AIK).

DISM is used to mount and service Windows images.

  • Mount an offline image
  • Add drivers to an offline image
  • Enable or disable Windows features
  • Add or remove packages
  • Add language packs
  • Add Universal Windows apps
  • Upgrade the Windows edition
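
A typical offline servicing sequence looks like the following sketch (the image and mount paths are illustrative assumptions):

# Mount index 1 of install.wim to a working folder
Dism /Mount-Image /ImageFile:C:\Images\install.wim /Index:1 /MountDir:C:\Mount

# Inject drivers from a folder, including subfolders
Dism /Image:C:\Mount /Add-Driver /Driver:C:\Drivers /Recurse

# Commit the changes and unmount the image
Dism /Unmount-Image /MountDir:C:\Mount /Commit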

Sysprep prepares a Windows installation for imaging and allows you to capture a customized installation.

  • Generalize a Windows installation
  • Customize the default user profile
  • Use answer files
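
For example, to generalize a reference installation and shut it down ready for capture (the answer file path here is only an assumption):

# Generalize the installation, boot to OOBE on next start, and apply an answer file
C:\Windows\System32\Sysprep\sysprep.exe /generalize /oobe /shutdown /unattend:C:\Windows\Panther\unattend.xml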

Windows PE (WinPE) is a small operating system used to boot a computer that does not have an operating system. You can boot to Windows PE and then install a new operating system, recover data, or repair an existing operating system.

  • Create a bootable USB drive
  • Create a Boot CD, DVD, ISO, or VHD

Windows Recovery Environment (Windows RE) is a recovery environment that can repair common problems.

Windows System Image Manager (Windows SIM) creates “answer files” that change Windows settings and run scripts during installation.

  • Create answer file
  • Add a driver path to an answer file
  • Add a package to an answer file
  • Add a custom command to an answer file

Windows Imaging and Configuration Designer (ICD) customizes and provisions Windows 10. It’s a similar concept to using imagex, by importing applications, updating drivers, etc.

  • Build and apply a provisioning package
  • Export a provisioning package
  • Build and deploy an image for Windows 10 for desktop editions

When using it to provision Windows 10:

Volume Activation Management Tool (VAMT)

The Volume Activation Management Tool (VAMT) allows you to automate and centrally manage the Windows, Office, and other Microsoft products volume and retail-activation process. VAMT can manage volume activation using Multiple Activation Keys (MAKs) or the Windows Key Management Service (KMS).

https://docs.microsoft.com/en-us/windows/deployment/images/volumeactivationforwindows81-18.jpg

User State Migration Tool (USMT)

The User State Migration Tool (USMT) is a user-profile migration tool. USMT includes three command-line tools: ScanState.exe, LoadState.exe, and UsmtUtils.exe. USMT also includes a set of three modifiable .xml files: MigApp.xml, MigDocs.xml, and MigUser.xml. You can create custom migration .xml files and you can also create a Config.xml file to specify files or settings to exclude from the migration.

The USMT broadly works in these three steps:

  1. Configure USMT: Make copies and modify the three migration XML files

    
      MigApp.xml, MigDocs.xml, and MigUser.xml
      
  2. Scan Source Computer

    
      scanstate \\server\migration\mystore /config:config.xml /i:migdocs.xml /i:migapp.xml /v:13 /l:scan.log
      
  3. Load results on Destination Computer

    
      loadstate \\server\migration\mystore /config:config.xml /i:migdocs.xml /i:migapp.xml /v:13 /l:load.log
      
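To generate the optional Config.xml mentioned above, ScanState has a /genconfig switch; run it on a machine representative of your source computers and then edit the generated file to exclude components:

scanstate /genconfig:config.xml /i:migdocs.xml /i:migapp.xml /v:13 /l:genconfig.log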

Windows To Go

Windows To Go allows you to boot a fully manageable Windows environment on a USB jump drive. You insert the USB drive (known as a Windows To Go workspace) into a computer to boot and run a managed Windows 10 system.

You can easily start the wizard by opening Windows To Go in the Control Panel.

by David Maiolo, March 2018

Overview Co-management for Windows 10 Devices

Starting with Configuration Manager 1710, co-management allows you to concurrently manage Windows 10 1709 devices by using both Configuration Manager and Intune. It’s a solution that provides a bridge from traditional to modern management and allows a phased transition between the two products.

There are two major paths you can take to co-management:

  • Configuration Manager provisioned co-management: Windows 10 devices managed by Configuration Manager and hybrid Azure AD joined get enrolled into Intune.
  • Intune provisioned co-management: Devices already enrolled in Intune have the Configuration Manager client installed, which brings them into a co-managed state.

Prerequisites:

  • Configuration Manager version 1710 +
  • Windows 10 1709 (Fall Creators Update) devices
  • Azure AD
  • EMS or Intune license for all users
  • Azure AD automatic enrollment enabled
  • Intune subscription (MDM authority in Intune set to Intune)
  • If Configuration Manager client is installed: Hybrid Azure AD joined (joined to AD and Azure AD)
  • If Configuration Manager client is NOT installed: Cloud Management Gateway

Hybrid vs Co-Management

Although they sound similar, they are not the same thing. Co-management lets you concurrently manage devices in both Intune and Configuration Manager Console. Hybrid MDM with Configuration Manager integrates Intune’s MDM capabilities into Configuration Manager. In Hybrid, you can no longer use the Intune console.

If you have a hybrid MDM environment, you cannot enable co-management. You would need to first migrate to Intune standalone.

What Intune Can Manage with Co-Management

Once co-management is enabled, Configuration Manager still performs all of the traditional tasks that it always has. Now, Intune can also manage:

  • Compliance Policies (compliance for Conditional Access)
  • Windows Update for Business Policies
  • Resource Access Policies (policies which configure VPN, email and certificate settings)

Intune can also perform the following remote tasks on the Windows 10 devices:

  • Factory reset
  • Selective wipe
  • Delete devices
  • Restart device
  • Fresh start

How to Enable Co-Management

Co-management can be enabled for Windows 10 devices both when they are enrolled in Intune and when they are existing Configuration Manager clients. Either path results in a Windows 10 device concurrently managed by Configuration Manager and Intune (as well as joined to both AD and Azure AD).

Windows 10 Devices enrolled in Intune

When devices are enrolled in Intune first, you can install the Configuration Manager client on the devices by creating a new line-of-business app in Intune and using your ccmsetup.msi file with the following command line (the values in angle brackets are placeholders for your environment):


ccmsetup.msi CCMSETUPCMD="/mp:<CMG address> CCMHOSTNAME=<CMG address> SMSSiteCode=<site code> SMSMP=https://<management point FQDN> AADTENANTID=<Azure AD tenant ID> AADTENANTNAME=<Azure AD tenant name> AADCLIENTAPPID=<client app ID> AADRESOURCEURI=https://<resource URI>"

Then, you enable co-management from the Configuration Manager console.

Brand New Windows 10 Devices

For new devices you can use Windows AutoPilot to configure the Out of Box Experience (OOBE), which includes automatic enrollment that enrolls devices in Intune.

First, create a new Windows AutoPilot Deployment Program profile in Intune:

Then, find the devices you want the profile enabled for and assign the profile to those devices.

Windows 10 Configuration Manager Clients

You can enroll these devices and enable co-management from the Configuration Manager console. Configuration Manager starts automatic enrollment into Intune based on the Azure AD tenant they belong to.

Configure Configuration Manager for Co-Management

A few things are left to be done. First, we need to enable co-management in the Configuration Manager Console. Then, we need to start switching specific Configuration Manager workloads to Intune.

  1. In the Configuration Manager console, go to Administration > Overview > Cloud Services > Co-management, then click Configure co-management to open the Co-management Configuration Wizard.
  2. Sign In to your Intune tenant, and then click Next.
  3. On the Enablement page, choose either Pilot or All to enable Automatic enrollment in Intune, and then click Next.
  4. On the Workloads page, choose to switch Configuration Manager workloads to be managed by Pilot Intune or Intune, and then click Next.
  5. To enable co-management, complete the wizard.


Check compliance for co-managed devices

Use the Software Center to detect compliance for co-managed Windows 10 devices. You can check this compliance regardless of whether conditional access is managed by Configuration Manager or Intune. You can also check compliance with the Company Portal app when conditional access is managed by Intune.

by David Maiolo 2018-03-16

Cloud-Based Management Service Overview

Internet-based client management has been available for years in Configuration Manager; however, it’s generally not very easy to set up, with an estimated 10% of Microsoft’s Configuration Manager install base having actually used it.

Starting with the Configuration Manager 1610 release, management of internet-based clients is available through an Azure-hosted service called the Cloud Management Gateway. This is done through a new role called the cloud management gateway connection point. Once the role is added in Configuration Manager, it becomes the point through which your internet-based clients proxy back into your on-premises Configuration Manager services, or your Azure-hosted Configuration Manager services.

The strategic goal in adding the Cloud Management Gateway to your environment is to provide an intermediary cloud solution on your roadmap to full cloud management of your Windows 10 devices through Microsoft Intune. In this intermediary stage you still have access to your traditional agent-based management from Configuration Manager while extending the perimeter to clients that roam on the internet. This is accomplished without adding additional infrastructure and without exposing any of your infrastructure to the internet.

The one drawback to this service compared with traditional internet-based client management is that it requires a monthly Microsoft Azure subscription for the cloud service.

Deployment and Configuration of the Cloud Management Gateway

Configuration Manager 1610 introduced the cloud management gateway to offer a simpler way to manage your internet-based clients. The cloud management gateway service is deployed to Azure and requires an Azure subscription.

The high-level certificate steps are:

  • Create and issue a custom Web Service Certificate (SSL Cert)
  • Request the Web Service Certificate (SSL Cert) from your CA
  • Export the custom Web Service Certificate (SSL Cert)
  • Create a Client Authentication Certificate
  • Create an Auto-Enroll Group Policy for the Client Authentication Certificate
  • Export the Client Root Certificate (CA / PKI Cert)
  • Upload the Management Cert to Azure

The high-level SCCM console management steps are:

  • Create the Cloud Management Gateway in SCCM
  • Add the Cloud Management Gateway Connector Point role
  • Configure the Management Point for the Cloud Management Gateway
  • Verify the Client is communicating with the Cloud Management Gateway

Requirements

  • Cloud Management Gateway Connection Point role added to Site Server
  • Azure subscription
  • Client Certificate (Management Cert)
  • Web Certificate (SSL Cert)
  • Root Certificate (CA / PKI Cert)

Limitations

  • Each CMG supports 4,000 clients
  • CMG only supports MP and SUP roles
  • No Client Push
  • No OSD or Task Sequences
  • No Wake on LAN
  • No Peer Cache

Understanding the Required Certificates

Web Service Certificate (SSL Cert)

The Web Service Certificate is used by the Cloud Management Gateway when authenticating with the clients. It’s recommended that this certificate come from a public CA and that the certificate subject name match the public domain of your company. This certificate will be exported to a file, which becomes the Management Cert.

Management Cert

The Management Certificate is used to authenticate SCCM with Azure and configure and setup the instances of Cloud Management Gateway. After the certificate is created, go ahead and upload the certificate into the Azure portal.
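
As a rough sketch of the export steps using the built-in PKI PowerShell cmdlets (the thumbprint and file paths are placeholders you would substitute for your issued web service certificate):

# Placeholder thumbprint of the issued web service certificate
$thumbprint = "<web service certificate thumbprint>"
$cert = Get-Item "Cert:\LocalMachine\My\$thumbprint"

# Export the public portion (.cer) to upload to Azure as the management certificate
Export-Certificate -Cert $cert -FilePath "C:\Certs\CMG-Management.cer"

# Export the certificate with its private key (.pfx) for use in the CMG wizard
$pfxPassword = Read-Host -AsSecureString -Prompt "PFX password"
Export-PfxCertificate -Cert $cert -FilePath "C:\Certs\CMG-WebService.pfx" -Password $pfxPassword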

Client Cert

A client cert is required on any computer that will be managed by the Cloud Management Gateway. It also needs to be on your site server hosting the Cloud Management Gateway connection point. You can deploy the client certificate to your SCCM clients with an auto-enrollment GPO. Once the client cert is on a machine, the client’s root certificate needs to be exported. This will become the Client Root Certificate (CA / PKI Cert).

Client Root Certificate (CA / PKI Cert)

This is the root certificate for the clients’ PKI certificates. Internet-based clients still require PKI certificates to authenticate with Configuration Manager, and this is the root of that certificate chain.

Create the Cloud Management Gateway in SCCM

Now that you have your certificates created, you need to enable the Cloud Management Gateway feature in the console by going to Administration -> Overview -> Updates and Servicing -> Features, then right-clicking the Cloud Management Gateway feature and turning it on.

Next, go to Administration -> Overview -> Site Configuration -> Sites and set the Client Computer Communication settings to HTTPS or HTTP and select Use PKI client certificate.

Now, we will create the Cloud Management Gateway. Go to Administration -> Overview -> Cloud Services -> Cloud Management Gateway. Click Create Cloud Management Gateway.

Next add your Azure Subscription ID and Upload the Management Cert

On next page we’ll upload the Web Service Certificate (SSL Cert) and the Client Root Certificate (CA / PKI Cert)

Once the wizard finishes, you will see the Cloud Management Gateway Provision and then Complete.

Add the Cloud Management Gateway Connector Point role

Now we need to add the Cloud Management Gateway Connection Point role. Go to Administration -> Overview -> Site Configuration -> Servers and Site System Roles, then add the role.

Go through the wizard with the default settings and make sure it chose the Cloud Management Gateway you created earlier.

Configure the Management Point for the Cloud Management Gateway

Now we have to tell the management point that it is OK to accept Cloud Management Gateway traffic. Go to Administration -> Overview -> Site Configuration -> Servers and Site System Roles and open your Management Point Properties.

Select the settings to Allow Configuration Manager Cloud Management Gateway Traffic.

Verify the Client is communicating with the Cloud Management Gateway

Finally, we need to verify everything is working. Connect one of your clients to an external internet connection such as a home Wi-Fi.

Run a Machine Policy Retrieval & Evaluation cycle from the Configuration Manager control panel applet

And finally verify under the Network Tab that you are connected to your Cloud Management Gateway

If you need more help creating certificates, or have custom settings you would like to apply to your Cloud Management Gateway, consult the latest Microsoft documentation for setting up a Cloud Management Gateway.

In this tutorial, I provide an overview of Process Monitor (ProcMon), a powerful Windows monitoring tool. I explain how to start and filter ProcMon, find changed values, enable boot logging, and run ProcMon against a remote machine. I created this tutorial to practice key concepts for my upcoming interview for the Senior Solutions Architect position at Microsoft. By mastering ProcMon and other tools in the Windows Sysinternals suite, I was able to showcase my troubleshooting and diagnostic skills to the Microsoft hiring team.

Process Monitor (ProcMon) Overview

Process Monitor is a monitoring tool for Windows that shows live file, Registry and process/thread activity. It is a combination of two older Sysinternals utilities, Filemon and Regmon.

Process Monitor is a part of Windows Sysinternals which is a set of utilities to manage, diagnose, troubleshoot, and monitor Windows. Sysinternals was originally created in 1996 by Winternals Software and was started by Bryce Cogswell and Mark Russinovich. Microsoft acquired Winternals on July 18, 2006, which included Sysinternals and the utilities within it.

The set of tools is available on any Windows computer by opening \\live.sysinternals.com\tools\ in File Explorer. This UNC path is a service provided by Microsoft and is referred to as Sysinternals Live.
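
For example, assuming outbound WebDAV access is allowed on your network, you can launch the tool directly from the share without installing anything:

\\live.sysinternals.com\tools\Procmon.exe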

Starting Process Monitor

You must run ProcMon.exe from an elevated command prompt so that it opens in administrative mode, as it needs to install filter drivers. As soon as you start it, it begins capturing and quite quickly starts consuming space in your paging file. Therefore, only run it for the time you need; leaving it running for long periods can exhaust resources and crash your computer, unless you run it with Drop Filtered Events enabled against a specific filter. More on this under Filtering with Process Monitor.

Filtering with Process Monitor

ProcMon can be run for days if you choose to have it filter for a certain type of event. Start by selecting Filter -> Drop Filtered Events.

Choosing this option means that only events matching the filter are saved to the log file; otherwise, the filter only changes what you see while everything is still written to the log. Now, filter to only view processes where the result is Access Denied by opening Filter -> Filter:

You can also filter right from the main console by selecting a process, right-clicking, and choosing one of the filtering options. For example, if we choose Exclude Events After this event, we can see that it automatically creates a filter for this choice, which we can remove later.

Once you have a specific filter set that might be useful for a certain troubleshooting task, you can choose to Save or Load the filters under the Filter menu:

Advanced Filtering with Process Monitor

In some instances, you may want to view all events, including those that are filtered out of ProcMon by default. You could opt to manually remove all of the built-in filters, but an easier way is to simply select Filter -> Enable Advanced Output.

How to Find Changed Values

Some people use ProcMon to try to see every change a process makes, but that can become daunting. An easier method is to filter for the specific events you care about. Let’s say, for example, we want to see what registry values are set when we disable Automatic Restart on system failure. To do this, first stop and clear the trace, then filter ProcMon to only show RegSetValue operations:

Now, begin the capture and make the desired change:

Stop the capture once the change has been made. Now, we can easily see the registry value change that was required to make this change:

Enable Boot Logging

A very useful feature of Process Monitor is tracing events during logoff, shutdown, startup, and logon. There is a special feature for this in ProcMon under the Options menu. Select Enable Boot Logging and then reboot your system. The next time you open ProcMon, you’ll be prompted to save the boot log events to a file.

ProcMon Tools: Process Tree

ProcMon has several tools available by selecting Tools from the menu. For example, the Process Tree shows you each process’s lifetime and how long it lived during the trace.

Running ProcMon against a Remote Machine

Utilizing PsExec, we can run ProcMon against a remote machine if we cannot be at the remote site for monitoring.

To start the trace on a remote computer run:


Psexec \\<remote computer> /s /d procmon.exe /accepteula /quiet /backingfile c:\hostname_trace.pml

Now, to stop the trace on the remote computer run:


Psexec \\<remote computer> /s /d procmon.exe /accepteula /terminate

Finally, copy the log file from the remote machine to your local machine for viewing:


xcopy \\<remote computer>\c$\hostname_trace.pml c:\TEMP

You can then view the log file in ProcMon locally by running:


Procmon /openlog c:\temp\hostname_trace.pml

ProcMon Filter Drivers

If ProcMon has some issue connecting to the filter driver and gets stuck opening, you can run it to not connect to the filter driver:


Procmon /NoConnect

To view the filter driver associated with ProcMon, run:


fltmc

I created a tutorial for Process Explorer (ProcExp) to help me practice my skills for an upcoming interview to be a Sr Solutions Architect at Microsoft. Process Explorer is a tool within the Windows Sysinternals utilities that shows information about which handles and DLLs processes have opened or loaded. This tutorial covers a variety of topics, including how to start ProcExp in administrative mode, how to find running processes and those that close quickly, how to understand threads with Service Host (svchost.exe), and how to hunt for a virus. I also cover how to enable additional columns in ProcExp, and how to save column sets for future use. This tutorial helped me develop my technical skills and become more familiar with the Sysinternals toolkit.

Process Explorer (ProcExp) Overview

Process Explorer shows you information about which handles and DLLs processes have opened or loaded. This is the most downloaded tool of the Sysinternals toolkit, with over 3 Million downloads a year.

Process Explorer is a part of Windows Sysinternals which is a set of utilities to manage, diagnose, troubleshoot, and monitor Windows. Sysinternals was originally created in 1996 by Winternals Software and was started by Bryce Cogswell and Mark Russinovich. Microsoft acquired Winternals on July 18, 2006, which included Sysinternals and the utilities within it.

The set of tools is available on any Windows computer by opening \\live.sysinternals.com\tools\ in File Explorer. This UNC path is a service provided by Microsoft and is referred to as Sysinternals Live.

Starting Process Explorer

I recommend starting ProcExp.exe from an elevated command prompt so that it opens in administrative mode. If you start ProcExp in standard mode, you’ll notice it offers an extra option, Show Details for All Processes, which relaunches it with administrative rights:

Also, if you ever have issues opening ProcExp, you can clear its registry key at HKEY_CURRENT_USER\Software\Sysinternals.

One of the most useful ways to run ProcExp is before logon, or as a replacement for Task Manager. When you select to have Process Explorer replace Task Manager, it actually makes use of the Image File Execution Options registry key, which redirects taskmgr.exe to procexp.exe.

Another useful way to start ProcExp is at the Windows logon screen (CTRL+ALT+DEL). You can do this by adding an Image File Execution Options entry for Sticky Keys (sethc.exe) and having it open cmd.exe. Once at the logon screen, press Shift five times and cmd.exe will open, where you can run Process Explorer. This is useful for diagnosing headless servers, etc.
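
A minimal sketch of the registry change involved, run from an elevated prompt (remember to delete the value when you are done, since this is also a well-known attack technique):

# Launch cmd.exe whenever Sticky Keys (sethc.exe) is invoked, e.g. at the logon screen
reg add "HKLM\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Image File Execution Options\sethc.exe" /v Debugger /t REG_SZ /d "C:\Windows\System32\cmd.exe" /f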

Views in ProcExp

You can enable several additional columns in process explorer. To do this, right click on the columns and click Select Columns. In this example I have chosen columns that would help with debugging malware:

You can then choose to save the column set for future use by selecting View -> Save Column Set

Now, if you create multiple Column Sets you can toggle between them by entering CTRL+1, CTRL+2, etc.

Finding Running Processes

One of the best ways to determine which process a certain application belongs to is to drag the target (crosshair) tool onto its window; the process will become highlighted. In this example, I click the Registry Editor.

You can also reverse this process by right clicking a process and selecting to bring it to the front:

Finding Processes That Close Quickly

If you are trying to track down a process that is only very briefly popping up or closing quickly, there is a trick to see what processes might be doing this. To see them, first pause ProcExp by hitting the Space Key. Now, wait for the process to pop up on the screen, and then hit F5. Now, in ProcExp all new processes between the time you hit Space and F5 will be highlighted:

Threads and Stacks

Because Task Manager cannot show threads and stacks, this is one of the best uses of ProcExp. On a CPU core, only threads run; processes are more like buckets that contain many threads and are given their own memory allocation, etc. A stack is integral to a thread and represents the instructions in memory associated with the running thread. Stacks work just like a stack of plates, where you can push and pop items.

Understanding Threads with Service Host (svchost.exe)

If you want to know why a process crashed or is using a lot of memory, any one of the threads within it could be causing the problem. For example, Service Host (svchost.exe), the process that hosts Windows services, is one of the most common processes to eat up memory. Many Service Host instances run on a modern Windows OS, because as security has increased, so has the granularity needed to run svchost.exe with different permissions and to separate services so that one can be stopped without affecting the others. This introduced things like Service SIDs and Service Privileges; a service’s Service SID, for example, is added to its access token. To view the Service SID associated with a service, run:

sc showsid <servicename>

For example, you might know that TrustedInstaller owns everything in Windows, but TrustedInstaller is actually the Windows Modules Installer service:
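
For example (TrustedInstaller is the service's short name):

sc.exe showsid trustedinstaller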

Back to ProcExp, we can view the Access Token and privileges associated to the svchost running by going to Properties, then the Security tab:

Next, then look at the Services tab which will show you the Binaries that are associated to the services running under the svchost you selected.

Finally, we can view the threads running in the svchost selected to try and debug exactly what might be causing a hang, etc.

As a note, a computer running Windows 10 1703 and above, with more than 3,484 MB of RAM, will have every service placed in its own Service Host (svchost.exe). This should make debugging Services a little bit easier.

Hunting for a Virus

First, choose the process you think might be associated with the virus. To do this, you can look at which process is consuming the most CPU, but you can also verify image signatures and check VirusTotal.com for matches against the process.

If the signer is not verified, it doesn’t mean that it is a virus, but it warrants more investigation. The VirusTotal column shows whether any engines associate the hash value of the executable with something malicious.

Resource Usage in ProcExp

Another great use of ProcExp is determining what handles are open for a process. For example, if you wanted to see any process that had a handle open to Photoshop, click Find -> Find Handle or DLL, which will allow you to find the associated processes, threads, and DLLs:

Saving ProcExp Data

You can save a snapshot of the current data by selecting File -> Save As which will save a text file of the current view, with expanded details on the process you had highlighted:

In this Netsh Networking Shell tutorial, I explain how to use the Netsh command-line scripting utility that has been around since Windows Server 2003. Although somewhat deprecated by cmdlets available in PowerShell, Netsh lets you view or change the network configuration of your local computer or a remote computer. The tutorial takes you through how to browse around the tool, open contexts, and use sub-contexts to navigate through commands. It also covers how to use Netsh to manage remote servers and workstations, popular Netsh commands, and even provides an example batch file. The reason I created this tutorial was to help me improve my understanding of Netsh before my then-upcoming Microsoft interview to be a Senior Solutions Architect.

Netsh Overview

Netsh, pronounced “netch”, is a command-line scripting utility that has been around since Windows Server 2003. Although somewhat deprecated by cmdlets available in PowerShell, Netsh allows you to view or change the network configuration of your local computer or a remote computer. Netsh can be run at the command line or built into a script inside a batch file.

To start Netsh, open a command line shell or PowerShell and type:


Netsh

How to Browse Netsh

Once inside Netsh, type “?” to see a list of commands available to you:

Note how it describes the list as “commands in this context”. Contexts are groups of commands available to you once you are inside them. Contexts can be nested inside other contexts, and you’ll see it lists what sub-contexts are available. To get inside a context, just type its name, such as interface, and again a “?” will show you what commands are available to you in that context.

This is generally how you browse around. By opening contexts, typing “?” to see what is available, and typing sub-contexts to get even deeper until you find the command you want.

Also, note how it mentions that PowerShell should be used rather than Netsh for TCP/IP commands. This is true for most Netsh commands, so keep in mind that, although useful, PowerShell has largely taken over from Netsh.
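
For reference, here are rough PowerShell equivalents (from the built-in NetTCPIP and NetSecurity modules) for a few of the Netsh commands shown later in this tutorial; the interface name and addresses are examples only:

# Show the IP configuration (netsh interface ip show config)
Get-NetIPConfiguration

# Open an inbound firewall port (netsh advfirewall firewall add rule ...)
New-NetFirewallRule -DisplayName "HTTPS" -Direction Inbound -Action Allow -Protocol TCP -LocalPort 443

# Assign a static IPv4 address (netsh interface ip set address ... static ...)
New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.0.100 -PrefixLength 24 -DefaultGateway 192.168.0.254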

Using Netsh to Manage Remote Servers and Workstations

While you’re still at the command-line shell (not yet inside Netsh), you can invoke Netsh against a remote computer by following this format:


Netsh -r <remote computer> -u <domain\user> -p <password> <command>

In this example, we can see the IPV4 information on the remote computer 63769:

For the -r argument, you can supply the hostname as either the IP address, the hostname, or the FQDN of the remote host.

Running Commands in Netsh

The general syntax to run a netsh command is:


netsh [-a AliasFile] [-c Context] [-r RemoteComputer] [-u [DomainName\]UserName] [-p Password | *] [{NetshCommand | -f ScriptFile}]

For example, to open a firewall port on a remote computer:


netsh -r WORKSTATION001 -u DOMAIN\User -p P@ssw0rd! advfirewall firewall add rule name="SMB" dir=in action=allow protocol=TCP localport=445

Additionally, some commands require a parameter string. In the case where the parameter string requires a space, be sure to include it in quotes:


interface="Wireless Network Connection"

Popular Netsh Commands

  1. Show the IP configuration
    
    netsh interface ip show config
    
  2. Show IPv4 or IPv6 information
    
    netsh interface ipv6 show address
    
  3. Open a Firewall Port
    
    netsh advfirewall firewall
      add rule name="HTTPS"
      dir=in action=allow protocol=TCP localport=443
    
  4. Show Network Adapter Status
    
    netsh interface show interface
    
  5. Configure adapter for static IP Address
    
    netsh interface ip
      set address "Local Area Connection"
      static 192.168.0.100
      255.255.255.0 192.168.0.254 1
    
  6. Configure the adapter to obtain its DNS server via DHCP
    
    netsh interface ip 
      set dns "Local Area Connection" dhcp
    

Example Batch File

This is an example batch file:


netsh wins server 192.168.125.30 add name Name=MY_RECORD EndChar=04 IP={192.168.0.205}

netsh wins server 192.168.125.30 add partner Server=192.168.0.189 Type=2
  
netsh wins server 192.168.0.189 add partner Server=192.168.125.30 Type=2
  
netsh wins server 192.168.125.30 init push Server=192.168.0.189 PropReq=0
  
netsh wins server 192.168.0.189 show name Name=MY_RECORD EndChar=04

Overview

I created this tool, SCCM SUG to Configuration Baseline, to allow you to easily convert an SCCM Software Update Group into a Configuration Baseline. This would most likely be used if you wanted to target a specific client setting or application at a collection of computers that fail compliance for a particular Software Update Group. Although this might already be possible by selecting the software updates within a Software Update Group and creating a Configuration Baseline from them, this tool can easily automate the process on a schedule in the background.

This tool gathers Software Updates within a Software Update Group via queries to WMI on your SCCM site server, then builds a Configuration Baseline XML and XML Resource File (.RESX) with those items that are used to import back into SCCM. The tool can either compress them as a .CAB file for direct importing via the SCCM Console GUI or import them through a WMI instance POST.

Installing the Tool

Download the required files below and unzip them into a directory of your choosing.

Once downloaded, edit New-DGMSCCMSUGBaseline.ps1 and update the line at the very end of the script to include your SCCM Site Server (ProviderMachineName), your siteCode and the Software Update Group Name (SUGName).


New-DGMSCCMSUGBaseline -ProviderMachineName SCCMSERVER001 -siteCode XXX -SUGName "Software Upgrade Group Name"

You can also specify a FileSavePath which is the location the XML and RESX and CAB files will save if you would like to manipulate them or import them manually.


-fileSavePath "C:\users\username\Desktop\"

Additionally, the tool requires and imports the SCCM module ConfigurationManager.psd1. This module is included when you install the SCCM console, so typically this script needs to be run on a computer that has the SCCM console installed, and from an account that can make WMI queries against the SCCM site server.

Using the Tool

To start the tool, run the New-DGMSCCMSUGBaseline.ps1 script within PowerShell. Remember to include your desired Software Update Group name as the SUGName argument. You can also pipe the SUGName into the tool by running it as:


"Software Upgrade Group Name" | New-DGMSCCMSUGBaseline -ProviderMachineName SCCMSERVER001 -siteCode XXX

This could be useful if you’d like to automate or iterate through a group of Software Update Groups such as this example which would convert every one of your Software Update Groups to a Configuration Baseline:


# Expand the display name so that plain strings are piped into the -SUGName parameter
$SoftwareUpdateGroups = Get-CMSoftwareUpdateGroup | Select-Object -ExpandProperty LocalizedDisplayName

Foreach ($SoftwareUpdateGroup in $SoftwareUpdateGroups){
   $SoftwareUpdateGroup | New-DGMSCCMSUGBaseline -ProviderMachineName SCCMSERVER001 -siteCode XXX
}

If the tool was successful, you will see it create the importation files in the directory you chose (if none was chosen, look in C:\SUG):

Additionally, in SCCM you will see a new Configuration Baseline based off the Software Update Group you specified:

CB.Software.Update.(SVR – 1 – Pilot Server Updates – Net Framework 2018-02-13 07:50:09)

If a Configuration Baseline already exists, you will be prompted whether you would like to replace it. If you’d like to automatically replace the baseline without being prompted, replace this if logic within the Get-DGMSCCMWMISUGConfigurationBaselineDetails function:


#A SUG Baseline Already Exists for a Valid SUG Group Name. Let's determine from the user if this should be replaced
if ((Read-Host $ProviderMachineName ": A Baseline for $SUGName already exists. Do you want to proceed? (Y/N)").Tolower() -eq "n")

with a condition that will never be true, such as


if ($x -eq "theskyisblue")

Understanding the Exported XML

When the tool is run, it creates an XML of the Software Updates that will be imported back into SCCM. This XML is in the same format as the one inside a Configuration Baseline CAB file if you were to export one from the console. It can be useful to understand how this file works if you’d like to manipulate the one created by this tool, or one that is created from a Configuration Baseline export in the console:

PowerShell Function: New-DGMSCCMSUGBaseline.ps1


function New-DGMSCCMSUGBaseline{
<#  
.SYNOPSIS  
    Create a Configuration Baseline Based off a Software Update Group  
.DESCRIPTION  
    Create a Configuration Baseline Based off a Software Update Group 
.NOTES  
    File Name  : New-DGMSCCMSUGBaseline.ps1  
    Author     : David Maiolo - david.maiolo@gmail.com
    Version    : 2018-03-07
.LINK  
#>
 
    param(

        [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$true)]
        $SUGName,
        [Parameter(Position=1,Mandatory=$true,ValueFromPipeline=$false)]
        $ProviderMachineName,
        [Parameter(Position=2,Mandatory=$true,ValueFromPipeline=$false)]
        $siteCode,
        [Parameter(Position=3,Mandatory=$false,ValueFromPipeline=$false)]
        [string]$fileSavePath = "c:\SUG\"
        

    )

    #Connect to SCCM
    # Import the ConfigurationManager.psd1 module
    $module = "$($ENV:SMS_ADMIN_UI_PATH)\..\ConfigurationManager.psd1"
    if((Get-Module ConfigurationManager) -eq $null) {
        Write-Host Importing $module ...
        Import-Module $module -Force
    }

    # Connect to the site's drive if it is not already present
    if((Get-PSDrive -Name $SiteCode -PSProvider CMSite -ErrorAction SilentlyContinue) -eq $null) {
        $NewPSDrive = New-PSDrive -Name $SiteCode -PSProvider CMSite -Root $ProviderMachineName
    }


    function Get-DGMSUGGroupID{
    <#  
    .SYNOPSIS  
        Query WMI to get Configuration ID of Software Update Group  
    .NOTES  
        File Name  : New-DGMSCCMSUGBaseline.ps1  
        Author     : David Maiolo - david.maiolo@gmail.com
        Version    : 2018-03-07
    .LINK  
    #>
        [CmdletBinding()]
        [Alias()]
        Param
        (
            [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
            [String]$ProviderMachineName,
            [Parameter(Position=1,Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateLength(3,3)]
            [String]$Sitecode,
            [Parameter(Position=2,Mandatory=$true,ValueFromPipeline=$false)]
            [String]$SUGName
        )
        

        #Set SCCM WMI NameSpace
        $SCCMnameSpace = "root\SMS\SITE_$siteCode"

        #Query for Software Update Group Information
        $qry = "SELECT CI_ID FROM SMS_AuthorizationList where LocalizedDisplayName = '$SUGName'"

        try{
            $objComputerSystemProduct = Get-WmiObject -ComputerName $ProviderMachineName -Namespace $SCCMnameSpace -Query $qry
            if ($objComputerSystemProduct -eq $null){
                 Write-Host $ProviderMachineName ": An invalid SUG Group name was supplied. Exiting." -foregroundcolor red
                 break
            }else{
                #Write-Host $ProviderMachineName ": Succesfully queried WMI for SUG Configuration ID." -foregroundcolor green
                return $objComputerSystemProduct.CI_ID
            }
                
        }catch{
            Write-Host $ProviderMachineName ": Could NOT query WMI for SUG Configuration ID:" ($error[0]) -foregroundcolor red
            break
        }

    }

    function Get-DGMSCCMWMISUGGroupChildren{
    <#  
    .SYNOPSIS  
        Query WMI to get all Software Updates in a Software Update Group  
    .NOTES  
        File Name  : New-DGMSCCMSUGBaseline.ps1  
        Author     : David Maiolo - david.maiolo@gmail.com
        Version    : 2018-03-07
    .LINK  
    #>
        [CmdletBinding()]
        [Alias()]
        Param
        (
            [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
            [String]$ProviderMachineName,
            [Parameter(Position=1,Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateLength(3,3)]
            [String]$Sitecode,
            [Parameter(Position=2,Mandatory=$true,ValueFromPipeline=$false)]
            [String]$SUGConfigurationID
        )
        

        #Set SCCM WMI NameSpace
        $SCCMnameSpace = "root\SMS\SITE_$siteCode"

        #Query for Software Update Group Information
        $qry = "SELECT upd.* FROM SMS_SoftwareUpdate upd, SMS_CIRelation cr WHERE cr.FromCIID= $SUGConfigurationID AND cr.RelationType=1 AND upd.CI_ID=cr.ToCIID"

        try{
            $objComputerSystemProduct = Get-WmiObject -ComputerName $ProviderMachineName -Namespace $SCCMnameSpace -Query $qry
            if ($objComputerSystemProduct.Length -le 0){
                 Write-Host $ProviderMachineName ": An invalid SUG CI ID was SUGplied or no Software Updates exist in the SUG. Exiting." -foregroundcolor red
                 break
            }else{
                #Write-Host $ProviderMachineName ": Succesfully queried WMI for SUG Group Information." -foregroundcolor green
                return $objComputerSystemProduct
            }
                
        }catch{
            Write-Host $ProviderMachineName ": Could NOT query WMI for SUG Group Information:" ($error[0]) -foregroundcolor red
            break
        }

    }

    function Get-DGMSCCMWMISUGConfigurationBaselineDetails{
    <#  
    .SYNOPSIS  
        Query WMI to get Details of a Configuration Baseline  
    .NOTES  
        File Name  : New-DGMSCCMSUGBaseline.ps1  
        Author     : David Maiolo - david.maiolo@gmail.com
        Version    : 2018-03-07
    .LINK  
    #>
        [CmdletBinding()]
        [Alias()]
        Param
        (
            [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
            [String]$ProviderMachineName,
            [Parameter(Position=1,Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateLength(3,3)]
            [String]$Sitecode,
            [Parameter(Position=2,Mandatory=$true,ValueFromPipeline=$false)]
            [String]$SUGName,
            [Parameter(Position=3,Mandatory=$true,ValueFromPipeline=$false)]
            $SUGGroupChildren
        )
        
        #Build Baseline Name
        $SUGBaselineNameStart = "CB.Software.Update."
        $SUGBaselineName = $SUGBaselineNameStart + "(" + $SUGName + ")"

        #Set SCCM WMI NameSpace
        $SCCMnameSpace = "root\SMS\SITE_$siteCode"

        #Query for Software Update Group Information
        $qry = "SELECT * FROM SMS_ConfigurationBaselineInfo where LocalizedDisplayName = '$SUGBaselineName'"

        try{
            $objComputerSystemProduct = Get-WmiObject -ComputerName $ProviderMachineName -Namespace $SCCMnameSpace -Query $qry

            if ($objComputerSystemProduct -eq $null){
                #No SUG Baseline Yet Exists for a Valid SUG Group Name. Let's set one up
                Write-Host $ProviderMachineName ": No SUG Baseline Exists Yet for $SUGName. Setting up details for XML..."
                $NewBaseLine = $TRUE
		        $ScopeID = $SUGGroupChildren[0].ModelName.Substring(0,$SUGGroupChildren[0].ModelName.IndexOf("/")) -replace "Site_", "ScopeID_"
		        $BaselineLogicalName = "Baseline_" + [guid]::NewGuid().ToString()
		        $BaselineVersion = 1
            }else{
                #A SUG Baseline Already Exists for a Valid SUG Group Name. Let's determine from the user if this should be replaced
                if ((Read-Host $ProviderMachineName ": A Baseline for $SUGName already exists. Do you want to proceed? (Y/N)").Tolower() -eq "n")
		        {
			        Write-Host $ProviderMachineName ": A duplicate baseline creation has been by the user, exiting without making changes." -ForegroundColor Yellow
			        break
		        }else{
                    $BaselineCI_ID = $objComputerSystemProduct.CI_ID 
		            $BaselineCI_UniqueID = $objComputerSystemProduct.CI_UniqueID
		            
                    $NewBaseLine = $FALSE
                    $ScopeID = $BaselineCI_UniqueID.substring(0,$BaselineCI_UniqueID.indexof("/"))
		            $BaselineLogicalName = $objComputerSystemProduct.CI_UniqueID.substring($objComputerSystemProduct.CI_UniqueID.indexof("/")+1)
		            $BaselineVersion = $objComputerSystemProduct.SDMPackageVersion + 1


                     #Query for CI Information
                    $qry = "SELECT * FROM SMS_ConfigurationItem where CI_ID = $BaselineCI_ID"

                    try{
                        $CI = Get-WmiObject -ComputerName $ProviderMachineName -Namespace $SCCMnameSpace -Query $qry
                        if ($CI -eq $null){
                             Write-Host $ProviderMachineName ": CI $BaselineCI_ID does not exist, no action taken." -foregroundcolor red
                             break
                        }
                    }
                    catch{
                        Write-Host $ProviderMachineName ": Could NOT query WMI for CI ID $BaselineCI_ID :" ($error[0]) -foregroundcolor red
                        break
                    }
                    
                }
            }
            

            $result = [PSCustomObject] @{
                'NewBaseLine' = $NewBaseLine;
                'SUGBaselineName' = $SUGBaselineName;
                'ScopeID' = $ScopeID;
                'BaselineLogicalName' = $BaselineLogicalName;
                'BaselineVersion' = $BaselineVersion;
                'CI' = $CI;
            }

            return $result
               
        }catch{
            Write-Host $ProviderMachineName ": Could NOT query WMI for SUG Baseline:" ($error[0]) -foregroundcolor red
            break
        }

    }

    function New-DGMSCCMWMISUGConfigurationBaselineXML{
    <#  
    .SYNOPSIS  
        Create a new XML formatted file that will be used as a Configuration Baseline Import
    .NOTES  
        File Name  : New-DGMSCCMSUGBaseline.ps1  
        Author     : David Maiolo - david.maiolo@gmail.com
        Version    : 2018-03-07
    .LINK  
    #>
        [CmdletBinding()]
        [Alias()]
        Param
        (
            [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
            [String]$ProviderMachineName,
            [Parameter(Position=1,Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateLength(3,3)]
            [String]$Sitecode,
            [Parameter(Position=2,Mandatory=$true,ValueFromPipeline=$false)]
            $SUGConfigurationBaselineDetails,
            [Parameter(Position=3,Mandatory=$true,ValueFromPipeline=$false)]
            $SUGGroupChildren


        )

        try{

            $SUGBaselineName = $SUGConfigurationBaselineDetails.SUGBaselineName
            $ScopeID = $SUGConfigurationBaselineDetails.ScopeID
            $BaselineLogicalName = $SUGConfigurationBaselineDetails.BaselineLogicalName
            $BaselineVersion = $SUGConfigurationBaselineDetails.BaselineVersion

            $baselineXML = @"


  
  
  
    
      
      
    
    
    
    
    
    

"@

	foreach($SUGGroupChild in $SUGGroupChildren)
	{
		$ModelName = $SUGGroupChild.ModelName.Substring(0,$SUGGroupChild.ModelName.IndexOf("/"))
		$LogicalName = $SUGGroupChild.ModelName.Substring($SUGGroupChild.ModelName.IndexOf("/")+1)
		
        $baselineXML += @"
      

"@
	}

	    $baselineXML += @"
    
    
    
  

"@
        return $baselineXML
    }
    catch{
        Write-Host $ProviderMachineName ": Could NOT generate a Baseline XML based off the data provided:" ($error[0]) -foregroundcolor red
    }

    }

    function Get-DGMSCCMSUGXMLResource{
    <#  
    .SYNOPSIS  
        Create a new XML formatted resource file that will be used as a Configuration Baseline Import
    .NOTES  
        File Name  : New-DGMSCCMSUGBaseline.ps1  
        Author     : David Maiolo - david.maiolo@gmail.com
        Version    : 2018-03-07
    .LINK  
    #>
    [CmdletBinding()]
        [Alias()]
        Param
        (
            [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
            $SUGConfigurationBaselineDetails
        )

        try{

            $SUGBaselineName = $SUGConfigurationBaselineDetails.SUGBaselineName
            $ScopeID = $SUGConfigurationBaselineDetails.ScopeID
            $BaselineLogicalName = $SUGConfigurationBaselineDetails.BaselineLogicalName
            $BaselineVersion = $SUGConfigurationBaselineDetails.BaselineVersion

            $ScopeID = $ScopeID -replace "Scope",""

            $SUGResourceXML = @"


  
  
    
    
      
        
          
            
              
                
              
              
              
              
              
            
          
          
            
              
              
            
          
          
            
              
                
                
              
              
              
              
              
            
          
          
            
              
                
              
              
            
          
        
      
    
  
  
    text/microsoft-resx
  
  
    2.0
  
  
    System.Resources.ResXResourceReader, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
  
  
    System.Resources.ResXResourceWriter, System.Windows.Forms, Version=4.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089
  
  
    $SUGBaselineName
  

"@

            return $SUGResourceXML
        }
        catch{
            Write-Host $ProviderMachineName ": Could NOT create XML Resource File:" ($error[0]) -foregroundcolor red
        }

    }

    function Import-DGMSCCMWMISUGConfigurationBaselineXML{
    <#  
    .SYNOPSIS  
        Import an XML formatted file and resource file that will be become a Configuration Baseline
    .NOTES  
        File Name  : New-DGMSCCMSUGBaseline.ps1  
        Author     : David Maiolo - david.maiolo@gmail.com
        Version    : 2018-03-07
    .LINK  
    #>
        [CmdletBinding()]
        [Alias()]
        Param
        (
            [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$false)]
            [String]$ProviderMachineName,
            [Parameter(Position=1,Mandatory=$true,ValueFromPipeline=$false)]
            [ValidateLength(3,3)]
            [String]$siteCode,
            [Parameter(Position=2,Mandatory=$true,ValueFromPipeline=$false)]
            $DGMSCCMSUGXMLResource,
            [Parameter(Position=3,Mandatory=$true,ValueFromPipeline=$false)]
            $SUGConfigurationBaselineXML,
            [Parameter(Position=4,Mandatory=$true,ValueFromPipeline=$false)]
            $SUGConfigurationBaselineDetails
        )

        
        try{

            $NewBaseLine = $SUGConfigurationBaselineDetails.NewBaseLine
            $SUGBaselineName = $SUGConfigurationBaselineDetails.SUGBaselineName
            $ScopeID = $SUGConfigurationBaselineDetails.ScopeID
            $BaselineLogicalName = $SUGConfigurationBaselineDetails.BaselineLogicalName
            $BaselineVersion = $SUGConfigurationBaselineDetails.BaselineVersion

	        if ($NewBaseLine -eq $TRUE)
	        {
                Write-Host $ProviderMachineName ": Building Details for New Baseline: $SUGBaselineName..."


                 #Query for CI Information

                $LD = [PSCustomObject] @{
                'LocaleID' = 1033;
                'LocalizedData' = $DGMSCCMSUGXMLResource;
                }


                $CI = [PSCustomObject] @{
                    'SDMPackageLocalizedData' = $LD;
                    'IsBundle' = $false;
                    'IsExpired' = $false;
                    'IsUserDefined' = $true
                    'ModelID' = 16777367
                    'PermittedUses' = 0
                    'PlatformCategoryInstance_UniqueIDs' = "Platform:C92857DF-9FD1-4FAD-BAA1-BE9FAD4B4F74"
                    'SDMPackageXML' = $SUGConfigurationBaselineXML;
                }

	        }else{
                $CI = $SUGConfigurationBaselineDetails.CI

                Write-Host $ProviderMachineName ": Building Details for Pre-Existing Baseline: $SUGBaselineName..."
            }
	
	        if ($NewBaseLine -eq $TRUE) {
                Write-Host $ProviderMachineName ": Creating baseline..."
                $NameSpace = "root\SMS\SITE_$siteCode"
                Set-WmiInstance -ComputerName $ProviderMachineName -Namespace $NameSpace -Class SMS_ConfigurationItem -PutType Create -Argument $CI
            }else {
                Write-Host $ProviderMachineName ": Updating baseline..."
                $NameSpace = "root\SMS\SITE_$siteCode"
                Set-WmiInstance -ComputerName $ProviderMachineName -Namespace $NameSpace -Class SMS_ConfigurationItem -PutType UpdateOnly -Argument $CI
            }

            Write-Host $ProviderMachineName ": Baseline Import Succesful: $SUGBaselineName" -ForegroundColor Green
        }
        catch{
            Write-Host $ProviderMachineName ": Could NOT import XML data or XML Resource data for Baseline $SUGBaselineName into SCCM:" ($error[0]) -foregroundcolor red
        }
    }

    function New-CabinetFile {
    <#  
    .SYNOPSIS  
        Create a cabinet file using a list of files as the source. This is used best for importing into SCCM
    .NOTES  
        File Name  : New-DGMSCCMSUGBaseline.ps1  
        Author     : David Maiolo - david.maiolo@gmail.com
        Version    : 2018-03-07
    .LINK  
    #>
        [CmdletBinding()]
        Param(
            [Parameter(HelpMessage="Target .CAB file name.", Position=0, Mandatory=$true, ValueFromPipelineByPropertyName=$true)]
            [ValidateNotNullOrEmpty()]
            [Alias("FilePath")]
            [string] $Name,
 
            [Parameter(HelpMessage="File(s) to add to the .CAB.", Position=1, Mandatory=$true, ValueFromPipeline=$true)]
            [ValidateNotNullOrEmpty()]
            [Alias("FullName")]
            [string[]] $File,
 
            [Parameter(HelpMessage="Default intput/output path.", Position=2, ValueFromPipelineByPropertyName=$true)]
            [AllowNull()]
            [string[]] $DestinationPath,
 
            [Parameter(HelpMessage="Do not overwrite any existing .cab file.")]
            [Switch] $NoClobber
            )
 
        Begin { 
    
            ## If $DestinationPath is blank, use the current directory by default
            if ($DestinationPath -eq $null) { $DestinationPath = (Get-Location).Path; }
            Write-Verbose "New-CabinetFile using default path '$DestinationPath'.";
            Write-Verbose "Creating target cabinet file '$(Join-Path $DestinationPath $Name)'.";
 
            ## Test the -NoClobber switch
            if ($NoClobber) {
                ## If file already exists then throw a terminating error
                if (Test-Path -Path (Join-Path $DestinationPath $Name)) { throw "Output file '$(Join-Path $DestinationPath $Name)' already exists."; }
            }
 
            ## Cab files require a directive file, see 'http://msdn.microsoft.com/en-us/library/bb417343.aspx#dir_file_syntax' for more info
            $ddf = ";*** MakeCAB Directive file`r`n";
            $ddf += ";`r`n";
            $ddf += ".OPTION EXPLICIT`r`n";
            $ddf += ".Set CabinetNameTemplate=$Name`r`n";
            $ddf += ".Set DiskDirectory1=$DestinationPath`r`n";
            $ddf += ".Set MaxDiskSize=0`r`n";
            $ddf += ".Set Cabinet=on`r`n";
            $ddf += ".Set Compress=on`r`n";
            ## Redirect the auto-generated Setup.rpt and Setup.inf files to the temp directory
            $ddf += ".Set RptFileName=$(Join-Path $ENV:TEMP "setup.rpt")`r`n";
            $ddf += ".Set InfFileName=$(Join-Path $ENV:TEMP "setup.inf")`r`n";
 
            ## If -Verbose, echo the directive file
            if ($PSCmdlet.MyInvocation.BoundParameters["Verbose"].IsPresent) {
                foreach ($ddfLine in $ddf -split [Environment]::NewLine) {
                    Write-Verbose $ddfLine;
                }
            }
        }
 
        Process {
   
            ## Enumerate all the files add to the cabinet directive file
            foreach ($fileToAdd in $File) {
        
                ## Test whether the file is valid as given and is not a directory
                if (Test-Path $fileToAdd -PathType Leaf) {
                    Write-Verbose """$fileToAdd""";
                    $ddf += """$fileToAdd""`r`n";
                }
                ## If not, try joining the $File with the (default) $DestinationPath
                elseif (Test-Path (Join-Path $DestinationPath $fileToAdd) -PathType Leaf) {
                    Write-Verbose """$(Join-Path $DestinationPath $fileToAdd)""";
                    $ddf += """$(Join-Path $DestinationPath $fileToAdd)""`r`n";
                }
                else { Write-Warning "File '$fileToAdd' is an invalid file or container object and has been ignored."; }
            }       
        }
 
        End {
    
            $ddfFile = Join-Path $DestinationPath "$Name.ddf";
            $ddf | Out-File $ddfFile -Encoding ascii | Out-Null;
 
            Write-Verbose "Launching 'MakeCab /f ""$ddfFile""'.";
            $makeCab = Invoke-Expression "MakeCab /F ""$ddfFile""";
 
            ## If Verbose, echo the MakeCab response/output
            if ($PSCmdlet.MyInvocation.BoundParameters["Verbose"].IsPresent) {
                ## Recreate the output as Verbose output
                foreach ($line in $makeCab -split [environment]::NewLine) {
                    if ($line.Contains("ERROR:")) { throw $line; }
                    else { Write-Verbose $line; }
                }
            }
 
            ## Delete the temporary .ddf file
            Write-Verbose "Deleting the directive file '$ddfFile'.";
            Remove-Item $ddfFile;
 
            ## Return the newly created .CAB FileInfo object to the pipeline
            Get-Item (Join-Path $DestinationPath $Name);
        }
    }


    $DGMSUGGroupID = Get-DGMSUGGroupID -ProviderMachineName $ProviderMachineName -Sitecode $siteCode -SUGName $SUGName
    $DGMSCCMWMISUGGroupChildren = Get-DGMSCCMWMISUGGroupChildren -ProviderMachineName $ProviderMachineName -Sitecode $siteCode -SUGConfigurationID $DGMSUGGroupID
    $DGMSCCMWMISUGConfigurationBaselineDetails = Get-DGMSCCMWMISUGConfigurationBaselineDetails -Sitecode $siteCode -SUGName $SUGName -ProviderMachineName $ProviderMachineName -SUGGroupChildren $DGMSCCMWMISUGGroupChildren
    $DGMSCCMSUGXMLResource = Get-DGMSCCMSUGXMLResource -SUGConfigurationBaselineDetails $DGMSCCMWMISUGConfigurationBaselineDetails
    $DGMSCCMWMISUGConfigurationBaselineXML = New-DGMSCCMWMISUGConfigurationBaselineXML -ProviderMachineName $ProviderMachineName -Sitecode $siteCode -SUGConfigurationBaselineDetails $DGMSCCMWMISUGConfigurationBaselineDetails -SUGGroupChildren $DGMSCCMWMISUGGroupChildren
    

    #Create Resource Files and Cab Files for Import

    $filePath = "c:\SUG\"

    $SUGGroupFileName = $SUGName -replace '[^a-zA-Z0-9]', ''

    $XMLFile = "$SUGGroupFileName.xml"
    $XMLResourceFile = "$SUGGroupFileName.resx"
    $CabinetFile = "$SUGGroupFileName.cab"

    $XMLFilePath = Join-Path $filePath $XMLFile
    $XMLResourceFilePath = Join-Path $filePath $XMLResourceFile
    $CabinetFilePath = Join-Path $filePath $CabinetFile

    $DGMSCCMWMISUGConfigurationBaselineXML | Out-File -FilePath $XMLFilePath
    $DGMSCCMSUGXMLResource | Out-File -FilePath $XMLResourceFilePath

    New-CabinetFile -Name $CabinetFile -File $XMLFilePath,$XMLResourceFilePath -DestinationPath $filePath

    #Import-DGMSCCMWMISUGConfigurationBaselineXML -ProviderMachineName $ProviderMachineName -siteCode $siteCode -DGMSCCMSUGXMLResource $DGMSCCMSUGXMLResource -SUGConfigurationBaselineXML $DGMSCCMWMISUGConfigurationBaselineXML -SUGConfigurationBaselineDetails $DGMSCCMWMISUGConfigurationBaselineDetails
    
    # Set the current location to be the site code and import the baseline
    Set-Location "$($SiteCode):\"
    Import-CMBaseline -FileName $CabinetFilePath -Force

    $XMLFile
    $XMLResourceFile
    $CabinetFile
}


New-DGMSCCMSUGBaseline -ProviderMachineName SCCMSERVER001 -siteCode XXX -SUGName "Software Upgrade Group Name" -fileSavePath "C:\users\you\Desktop\"

Overview

I developed this tool, Run-DGMFireEyeHXCompliance.psm1, to test and confirm a FireEye Endpoint Security (HX) rollout in a corporate environment. Additionally, at the end of this document I have provided you with a FireEye HX Deployment Strategy approach for your corporate environment.

For some background, FireEye Endpoint Security (HX) is an Endpoint Forensics product provided by FireEye and is part of the Endpoint Security (HX Series) of 5th Generation Appliances. It is an endpoint security tool to help an organization monitor indicators of compromise (IOC) on endpoints and respond to cyber-attacks on the endpoint before data loss might occur.

DGMFireEyeHXCompliance Tool

The DGMFireEyeHXCompliance tool remotely invokes a test routine to gather data from an HX endpoint perspective and is intended to be deployed after an HX cloud rollout. Once deployed, it performs the following checks and tasks on each HX endpoint computer:

  • Verify egress access on port 443 is open to the FireEye HX Cloud Connector
  • Verify egress access on port 80 is open to the FireEye HX Cloud Connector
  • Verify the FireEye HX xagt service is able to start properly
  • Verify that, after a specified wait time, the xagt service is still running (this tests for incompatibilities with the service on certain workstations)
  • Start a Microsoft Message Analyzer packet capture (Primary Event Trace Log (ETL)) for a specific period of time, restarting the FireEye HX xagt service at the start of the capture if it has stopped
  • Stop and save the Primary Event Trace Log (ETL) results along with the packet capture help cab files
  • Output any Web Proxy PAC configuration file URLs, if configured
  • Generate and save a FireEye HX troubleshooting log file
  • Copy the packet capture results, packet capture assist files, and xagt troubleshooting log files to a central location of your choosing
  • Continue testing with the next FireEye HX endpoint computer supplied in the array

The tool runs the tests on each HX endpoint and aggregates the following results centrally:

  • Test information as shown above during the test routine
  • Microsoft Message Analyzer CAB File (containing many Network Analysis Results inside)
  • Microsoft Message Analyzer Packet Capture Session File (Primary Event Trace Log (ETL))
  • xagt Service Troubleshooting Log

Downloading the DGMFireEyeHXCompliance Tool

To download the tool, download and extract the module below. Inside you will find the single invocation and application module Run-DGMFireEyeHXCompliance.psm1.

Running the DGMFireEyeHXCompliance Tool

Before running the tool, update the values within the Invoke-DGMFireEyeHXCompliance function at the end of the script. First, change the save path where you would like the logs and trace results saved. This is the local path on the computer that will initiate the script:


$LocalPath = "c:\Run-DGMFireEyeHXCompliance\" 

Next, indicate where you would like the results saved on the HX endpoint computers. These results will be removed automatically once aggregated to the source computer:


$RemotePath = "c:\temp\"

Finally, update the $HXEndpointComputers array to indicate the list of HX endpoint computers you would like to run the test against. You can also pass an array of computers to the application through the pipeline.


$HXEndpointComputers = @("DGM-SITE01-TST","DGM-SITE02-TST","VDGMDFS005VER","VDGMDFS006VER","WORKSTATION001","WORKSTATION002","WORKSTATION003")

Additionally, you will need to update the FireEye HX Cloud Connector address to match the connector in your environment, along with any ports you wish to test. You can find this line at the bottom of the Run-DGMFireEyeHXCompliance function:


Get-DGMTCPPortOpenResults -tcpports 80,443 -destinationAddress "dgmtest-hxdmz-agent-1.helix.apps.fireeye.com"

To run the application, load the module Run-DGMFireEyeHXCompliance.psm1 in PowerShell ISE and run the invocation function. ***Make sure the PowerShell session is run with an account that has full administrative permissions to each of the endpoint computers.*** Failure to do this will produce errors when attempting to invoke a remote PowerShell session or collect test results.


Invoke-DGMFireEyeHXCompliance

As the tool runs, each of the HX endpoints will loop through the test items, aggregating results to the central source.

Understanding DGMFireEyeHXCompliance Test Results

Once the tool has completed, first review the output, as it contains some test results that can only be viewed within the console. A sample output set is below. In these results we can see the Cloud Connector can be accessed on ports 80 and 443, but also that the FireEye HX xagt service stopped abruptly after being started and waiting 8 seconds to confirm it was still running. This is a legitimate issue that you may notice on virtual machines, and subsequent analysis of the packet trace data and xagt log results is necessary.

The tool outputs the log and packet capture results to the directory you choose, separating the results into a subdirectory for each HX endpoint you specified within the $HXEndpointComputers variable:

FireEye Packet Trace Analysis with DGMFireEyeHXCompliance

Within each directory, you will find the Microsoft Message Analyzer CAB File, the Microsoft Message Analyzer Packet Capture Session File, and the xagt Service Troubleshooting Log:

To view the Microsoft Message Analyzer Packet Capture Session File, download Microsoft Message Analyzer from https://www.microsoft.com/en-us/download/confirmation.aspx?id=44226. Open the Primary Event Trace Log (ETL) File in Microsoft Message Analyzer. You can opt to filter by TCP, HTTPS, etc to look into possible network issues between the HX endpoint and the cloud connector.

FireEye Advanced Network Analysis with DGMFireEyeHXCompliance

To view the Microsoft Message Analyzer CAB File, extract the CAB contents. Once extracted, you will notice several files, including another copy of the Primary Event Trace Log (ETL) file.

Below are descriptions of each of the files. Use these to diagnose network related issues.

General Configuration

  • OS Information: osinfo.txt
  • Credential Providers: allcred.reg.txt

Network Configuration

  • Environment Information: envinfo.txt
  • Adapter Information: adapterinfo.txt
  • DNS Information: dns.txt
  • Neighbor Information: neighbors.txt

Wireless Configuration

  • WLAN Auto-Config Eventlog: WLANAutoConfigLog.evtx

Windows Connect Now

  • WCN Information: WCNInfo.txt

Windows Firewall

  • Windows Firewall Configuration: WindowsFirewallConfig.txt
  • Windows Firewall Effective Rules: WindowsFirewallEffectiveRules.txt
  • Connection Security Eventlog: WindowsFirewallConsecLog.evtx
  • Connection Security Eventlog (Verbose): WindowsFirewallConsecLogVerbose.evtx
  • Firewall Eventlog: WindowsFirewallLog.evtx
  • Firewall Eventlog (Verbose): WindowsFirewallLogVerbose.evtx

Winsock Configuration

  • Winsock LSP Catalog: WinsockCatalog.txt

File Sharing

  • File Sharing Configuration: filesharing.txt

Registry Key Dumps

  • Credential Providers: AllCred.reg.txt
  • Credential Provider Filters: AllCredFilter.reg.txt
  • API Permissions: APIPerm.reg.txt
  • WlanSvc HKLM Dump: HKLMWlanSvc.reg.txt
  • WinLogon Notification Subscribers: Notif.reg.txt
  • Network Profiles: NetworkProfiles.reg.txt

Trace Files

  • Primary Event Trace Log (ETL): report.etl

FireEye XAGT Service Log Analysis with DGMFireEyeHXCompliance

Included with the compliance analysis is the xagt service troubleshooting log file. This was gathered by running C:\Program Files (x86)\FireEye\xagt\xagt.exe -g <logFilePath> locally on each HX endpoint.
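
For reference, the same log can be generated manually from an elevated PowerShell prompt on an endpoint; the log path below is only an example:

& "C:\Program Files (x86)\FireEye\xagt\xagt.exe" -g "C:\temp\xagt_troubleshooting.log"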

Pay specific attention towards the end of the file, looking for possible certificate issues, etc. As we see in this example, there is an error verifying the CRL signature in the .\crypto\rsa\rsa_pk1 file.

FireEye HX Deployment Approach

Purpose

The purpose of this section is to help you define a deployment strategy and plan for a FireEye HX Cloud deployment in your corporate environment. It comprises two parts: the Deployment Strategy and the Deployment Plan. The Deployment Strategy part is used to formulate a deployment approach for FireEye HX Cloud (xAgt 26.21.8). The Deployment Plan part contains the detailed schedule recommendations and the resource, technical, and support information necessary for a successful deployment of FireEye HX Cloud (xAgt 26.21.8).

RECOMMENDED SETTINGS For Deployment

The following are recommended configuration settings for your version 26.21.8 JSON file. Note that deploying the xAgent installation will either upgrade the previous version of the xAgent or install the new version if the computer does not already have it.

  • Server: URL of FireEye HX Cloud Connector
  • Active Collection Enabled: False
  • Production Exploit Detection Enabled: False
  • FIPS Enabled: False
  • Configuration Poll Interval: 900 Seconds
  • CPU Limit: 50%
  • Protection Services Enabled: True
  • Age to Purge Protection Services: 90 Days

Deployment Strategy

The Deployment Strategy section of this article provides an overview of the deployment strategy you should plan for a FireEye HX Cloud (xAgt 26.21.8) rollout. Included in the deployment strategy is suggested timeline information, a description of the deployment approach, and associated benefits, assumptions and risks.

Deployment Overview

The Deployment Dates referenced below are the dates on which FireEye HX Cloud (xAgt 26.21.8) could potentially begin installation on your selected HX endpoint computers. Because the MSI installation was designed to be able to occur outside your maintenance window, installation during each phase can be scheduled for any time during your chosen deployment dates.

Phase schedule (Phase / Sites / Computers / Scheduled Dates):

  • PRE-PILOT: Select Workstations and Servers, 10 computers, February 26, 2020 – February 27, 2020
  • PILOT: Pilot Workstation/Server Group, 100 computers, February 28, 2020 – March 02, 2020
  • PRODUCTION 1A WORKSTATIONS: Production Workstations Group, 1,000 computers, TBD based on Pilot and Pre-Pilot results
  • PRODUCTION 1B SERVERS: Production Server Group, 400 computers, TBD based on Pilot and Pre-Pilot results

Exclusions to Upgrade

The following types of systems should not be targeted within the scope of the project:

  • Citrix CEND Computers (but not CAU or CVU)
  • Domain Controllers (ADC)
  • Domain Joined Appliances
  • Virtual CNOs
  • Systems with no SCCM client

Deployment Plan

Deployment Approach

System Center Configuration Manager (SCCM) should be used to deploy FireEye HX Cloud (xAgt 26.21.8). As each phase is approached, the computers should be instructed to execute the installation in parallel, outside of the maintenance window. Create an SCCM application and deploy it as a required application to your HX endpoints, as I have here. A sketch of this approach appears below.
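
As a rough, hedged sketch, the equivalent ConfigurationManager cmdlets look like the following. The application name matches this rollout, but the MSI content location and collection name are placeholders you would substitute for your environment, and the content must already be distributed to your distribution points:

# A minimal sketch, run from the site (CMSite) drive with the ConfigurationManager module loaded.
# The content location and collection name below are assumptions for illustration only.
New-CMApplication -Name "FireEye HX Cloud (xAgt 26.21.8)"
Add-CMMsiDeploymentType -ApplicationName "FireEye HX Cloud (xAgt 26.21.8)" -DeploymentTypeName "xAgt MSI" -ContentLocation "\\FILESERVER\Source\FireEye\xagtSetup_26.21.8.msi"
New-CMApplicationDeployment -Name "FireEye HX Cloud (xAgt 26.21.8)" -CollectionName "Production Workstations Group" -DeployAction Install -DeployPurpose Required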

Assumptions and Risks

Assumptions

The workstations and servers targeted for deployment are assumed to be left on and connected to your corporate network during their respective phase window.

Risks

During a deployment of the 24.9.0 version of the xAgent, you may run into issues where processor utilization can spike during an xAgent scan.

Engagement and Promotion Strategy

During each deployment phase a corporate email should be sent to communicate the associated deployment phases. Members in your teams may choose to notify specific application owners if they feel the need.

Testing Methods and Monitoring

In the event an issue is identified, a rollback to the previous version should be deployed through the uninstall command on the application. Additional information should be available once the Pre-Pilot and Pilot phases of your rollout to xAgent 26.21.8 have completed.
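
Assuming the uninstall command on your SCCM application uses the standard MSI uninstall (a reasonable assumption for an MSI deployment type), the rollback amounts to the following, using the product code shown in the monitoring section below:

msiexec /x "{CB3A0A18-EA4B-45AA-801B-AAA9D00CABE5}" /qn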

Monitoring The Deployment

Basic Monitoring

Central monitoring of the FireEye HX Cloud (xAgt 26.21.8) rollout can be viewed from your computer by visiting http://sccmserver/Reports/ and searching for the report ‘All application deployments (basic)’.

Choose By: Application

Select Application:

  • FireEye HX Cloud (xAgt 26.21.8) (choose your xAgt deployment)

Select Collection (Application): All

Clicking the “View Current” data for the phase will allow you to further drill down, even to the computer and user level if necessary:

The SCCM detection method should be set up to determine whether the MSI product code associated with the FireEye HX Cloud (xAgt 26.21.8) MSI installer exists on the machine. In this example, the product code is:

{CB3A0A18-EA4B-45AA-801B-AAA9D00CABE5}
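
The MSI deployment type detects this product code automatically; as a hedged sketch, an equivalent script-based check against the registry uninstall keys would look something like this:

# Sketch only: report "Installed" if the xAgt MSI product code is present in either uninstall hive.
$productCode = '{CB3A0A18-EA4B-45AA-801B-AAA9D00CABE5}'
$uninstallKeys = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\$productCode",
                 "HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\$productCode"
if ($uninstallKeys | Where-Object { Test-Path $_ }) { Write-Output "Installed" }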

Critical-Need Endpoint Strategy

I developed an additional solution that allows you to start or stop the Mandiant xAgt on any set of your HX endpoints that are critical-need systems where you do not want the agent running during a certain time span.

Script 1: A small program that takes a list of your HX endpoint critical-need computers as input, and pauses the Mandiant agent on these computers.

Script 2: Another small program that resumes the Mandiant agent, again taking the same list as input.

The first script should be run on a schedule as a scheduled task. In this example, we choose 7AM for the first script and 8PM for the second. This creates a “Mandiant Agent is off during business hours and on at night” environment. A sketch of registering these tasks is shown below.

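A minimal sketch of registering both tasks on that server with the ScheduledTasks module, assuming the script paths listed in the technical notes below and that the tasks run as SYSTEM:

# Hypothetical registration of the stop (7AM) and start (8PM) tasks; adjust the paths and account to your environment.
$stopAction  = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\Scripts\XAGT_Service_Tasks\XAGT_Service_Tasks_Stop_Service.ps1"
$startAction = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-ExecutionPolicy Bypass -File C:\Scripts\XAGT_Service_Tasks\XAGT_Service_Tasks_Start_Service.ps1"
Register-ScheduledTask -TaskName "Stop xagt on critical-need endpoints"  -Action $stopAction  -Trigger (New-ScheduledTaskTrigger -Daily -At 7am) -User "SYSTEM"
Register-ScheduledTask -TaskName "Start xagt on critical-need endpoints" -Action $startAction -Trigger (New-ScheduledTaskTrigger -Daily -At 8pm) -User "SYSTEM"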

Technical Notes on Critical-Need Endpoint Computers Start / Stop Procedure

STOP / SET TO MANUAL THE xagt SERVICE ON Critical-Need MACHINES
  1. RDP into the server you want the scheduled task to run on
  2. Open PowerShell ISE -> C:\Scripts\XAGT_Service_Tasks\XAGT_Service_Tasks_Stop_Service.ps1
  3. Click Run (the Green Play Button)
START / SET TO AUTOMATIC THE xagt SERVICE ON Critical-Need MACHINES
  1. RDP into the server you want the scheduled task to run on
  2. Open PowerShell ISE -> C:\Scripts\XAGT_Service_Tasks\XAGT_Service_Tasks_Start_Service.ps1
  3. Click Run (the Green Play Button)
MODIFYING LIST of Critical-Need MACHINES

Modify: C:\Scripts\XAGT_Service_Tasks\hostnames_PRODUCTION.txt
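
The file is read with Get-Content, so it should contain one hostname per line, for example:

WORKSTATION001
WORKSTATION002
VDGMDFS005VER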

LOGS OF YOUR RESULTS

Starting Log: “C:\Scripts\XAGT_Service_Tasks\XAGT_Service_Tasks_START_DATE.log”

Stopping Log: “C:\Scripts\XAGT_Service_Tasks\XAGT_Service_Tasks_STOP_DATE.log”

Mandiant_xAgt_Tasks_Stop_Service.ps1

$path = "C:\Scripts\Mandiant_xAgr_Tasks"
$hostname = Get-Content "$path\hostnames_PRODUCTION.txt"
$service = "xagt"

foreach ($h in $hostname){

    try{
        (Get-Service -ComputerName $h -Name $service -ErrorAction Stop).Stop()
        Set-Service -ComputerName $h -Name $service -StartupType "Manual" -ErrorAction Stop
        Write-Host "Success: Stopped and set to Manual xagt service on $h ..." -foregroundcolor green
        Write-Output "Success: $(Get-Date); Stopped and Set to Manual the $service service on $h" >> "$path\Mandiant_xAgt_Tasks_STOP_$(Get-Date -Format dd-MM-yyyy).log"
    }
    catch{
        Write-Host "Error: Could NOT Stop nor set to Manual xagt service on $h ..." -foregroundcolor red
        Write-Output "Error: $(Get-Date); Could NOT Stop nor set to Manual xagt service on $h" >> "$path\Mandiant_xAgt_Tasks_STOP_$(Get-Date -Format dd-MM-yyyy).log"
    }

}
Mandiant_xAgt_Tasks_Start_Service.ps1

$path = "C:\Scripts\Mandiant_xAgr_Tasks"
$hostname = Get-Content "$path\hostnames_PRODUCTION.txt"
$service = "xagt"

foreach ($h in $hostname){

    try{
        (Get-Service -ComputerName $h -Name $service -ErrorAction Stop).Start()
        Set-Service -ComputerName $h -Name $service -StartupType "Automatic" -ErrorAction Stop
        Write-Host "Success: Started and set to Automatic xagt service on $h ..." -foregroundcolor green
        Write-Output "Success: $(Get-Date); Started and Set to Automatic the $service service on $h" >> "$path\Mandiant_xAgt_Tasks_START_$(Get-Date -Format dd-MM-yyyy).log"
    }
    catch{
        Write-Host "Error: Could NOT Start nor set to manual xagt service on $h ..." -foregroundcolor red
        Write-Output "Error: $(Get-Date); Could NOT Start nor set to Automatic xagt service on $h" >> "$path\Mandiant_xAgt_Tasks_START_$(Get-Date -Format dd-MM-yyyy).log"
    }

}

Mandiant Endpoint xAgent Antivirus Exclusion Policies

In order to prevent potential conflicts with antivirus and/or host-based intrusion detection software, a series of files should be whitelisted using SCCM Antimalware exclusion policies. Below are example policies you can create:

  • EP -SERVER – Mandiant Endpoint xAgent Antimalware Policy
  • EP -WORKSTATION – Mandiant Endpoint xAgent Antimalware

Collectively, the policies should target the workstations and servers that the Mandiant Endpoint xAgent application is deployed to. These policies need to exclude the following files:

  • C:\Program Files (x86)\FireEye\xagt\audits.dll
  • C:\ProgramData\FireEye\xagt\events.db
  • C:\ProgramData\FireEye\xagt\events.db-wal
  • C:\Windows\System32\drivers\FeKern.sys
  • C:\ProgramData\FireEye\xagt\main.db
  • C:\Program Files (x86)\FireEye\xagt\mindexer.sys
  • C:\Windows\FireEye\NamespaceToEvents_xx.dll (note: xx is a wildcard)
  • C:\Windows\FireEye\NamespaceToEvents32_xx.dll (note: xx is a wildcard)
  • C:\Program Files (x86)\FireEye\xagt\xagt.exe
  • C:\ProgramData\FireEye\xagt\xlog.db
  • C:\ProgramData\FireEye\xagt\xlog.db-wal

The DGMFireEyeHXCompliance PowerShell Module


function Run-DGMFireEyeHXCompliance{
<#  
.SYNOPSIS  
    Test FireEye HX Compliance and Aggregate Test Results   
.NOTES  
    File Name  : Run-DGMFireEyeHXCompliance.psm1  
    Author     : David Maiolo
    Version    : 2018-03-06
.LINK  
#>
 
    param(

        [Parameter(Position=0,Mandatory=$true,ValueFromPipeline=$true)]
        $HXEndpointComputers,
        [Parameter(Position=1,Mandatory=$true,ValueFromPipeline=$false)]
        $RemotePath,
        [Parameter(Position=2,Mandatory=$true,ValueFromPipeline=$false)]
        $LocalPath

    )

 
    #Create a script block to run on the remote machine to test for FireEye issues     
    $script = {
        #define local hostname
        $hostname = $env:COMPUTERNAME
        
        function Get-DGMTCPPortOpenResults{
            <#  
            .SYNOPSIS  
                Test TCP Port and Display Results  
            .NOTES 
                Author     : David Maiolo - david.maiolo@gmail.com  
            .LINK  
            #>
            param(
                $tcpports,
                $destinationAddress
            )
 
            $hostname = $env:COMPUTERNAME
 
            foreach ($port in $tcpports)
            {
       
                try{
                    $Socket = New-Object Net.Sockets.TcpClient      
                    $ErrorActionPreference = 'SilentlyContinue'
       
       
                    $Socket.Connect($destinationAddress, $port)
 
                    if ($Socket.Connected) {
                        Write-Host $hostname ": Outbound port $port is open to $destinationAddress." -ForegroundColor Green
                        $Socket.Close() 
                    }
                    else{
                        Write-Host $hostname ": Outbound port $port is closed or filtered to $destinationAddress." -ForegroundColor Red
                    }
 
                }catch{
                    Write-Host $hostname ": Could NOT open the socket for $port port, so result unknown." -foregroundcolor yellow
                }  
            }
        }
 
        function Start-DGMService{
            <#  
            .SYNOPSIS  
                Attempt to start a Service and Display Results  
            .NOTES 
                Author: David Maiolo - david.maiolo@gmail.com  
            .LINK  
            #>
            [CmdletBinding()]
            [OutputType([int])]
            Param
            (
                [Parameter(Mandatory=$true,ValueFromPipeline=$true, Position=0)]
                $service
            )
 
            $hostname = $env:COMPUTERNAME
            try{
                #(get-service -Name $service).Start()
                #set-service -Name $service -startuptype "Automatic" -ErrorAction Stop
                Start-Service $service -ErrorAction stop
                Write-Host $hostname ": Successfully started the $service service." -foregroundcolor green
            }
            catch{
                Write-Host $hostname ": Could NOT Start $service service:" ($error[0]) -foregroundcolor red
            }
        }
 
       function Test-DGMService{
           <#  
           .SYNOPSIS  
               Look to see if a service is running, stopped, etc and display results  
           .NOTES 
               Author: David Maiolo - david.maiolo@gmail.com  
           .LINK  
           #>
           [CmdletBinding()]
           [OutputType([int])]
           Param
           (
               [Parameter(Mandatory=$true,ValueFromPipeline=$true, Position=0)]
               $service
           )
 
           $hostname = $env:COMPUTERNAME
           try{
               $serviceStatus = (get-service -Name $service).Status
               if (($serviceStatus -eq "Running")){
                   Write-Host $hostname ": The $service service is currently $serviceStatus." -ForegroundColor Green
               }else{
                   Write-Host $hostname ": The $service service is currently $serviceStatus." -ForegroundColor Red
               }
           }
           catch{
               Write-Host $hostname ": The $service service could not be checked for current status. Result unknown." -foregroundcolor yellow
           }
       }

        function Get-DGMProxyPACFile{
        <#  
        .SYNOPSIS  
            Return proxy PAC file URL from registry entry  
        .NOTES 
            Author: David Maiolo - david.maiolo@gmail.com  
        .LINK  
        #>
 
            $hostname = $env:COMPUTERNAME
            try{
                $proxyPAC = Get-ItemProperty ('HKLM:\SYSTEM\CurrentControlSet\Services\NlaSvc\Parameters\Internet\ManualProxies')
                $proxyPAC = $proxyPAC.'(default)'
                Write-Host $hostname ": The proxy PAC auto-config file is $proxyPAC."
            }
            catch{
                Write-Host $hostname ": Unable to determine the proxy PAC auto-config file." -foregroundcolor yellow
            }
        }

        function Get-DGMFireEyeLogFile{
            <#  
            .SYNOPSIS  
                Creates xagt log file  
            .NOTES 
                Author: David Maiolo - david.maiolo@gmail.com  
            .LINK  
            #>
            [CmdletBinding()]
            [OutputType([int])]
            Param
            (
            [string]$logFile
            )
 
            $hostname = $env:COMPUTERNAME
            try{

                $logFile = "$hostname`_xagt_$(get-date -f yyyy-MM-ddTHHmmss).log"
                $logFolder = "c:\temp\"
 
                #Create subdirectory if not present
                if (!(Test-Path $logFolder)) {
                    New-Item $logFolder -ItemType Directory > $null
                }

                $logFilePath = Join-Path $logFolder $logFile
 
                $command = 'cmd.exe /C "C:\Program Files (x86)\FireEye\xagt\xagt.exe" -g '+$logFilePath
 
                $expressionresult = Invoke-Expression -Command $command
                $logFileContent = Get-Content $logFilePath -ErrorAction Stop
                $lastTwoLinesOfLogFile = $logFileContent | select -Last 2
                Write-Host $hostname ": The XAGT log file was Successfully created $logFile. Last two lines of the file:" -ForegroundColor Green
                $lastTwoLinesOfLogFile
 
            }
            catch{
                Write-Host $hostname ": Unable to create the XAGT log file." $error[0] -foregroundcolor red
            }
        }


        function Get-DGMTraceResults{
            <#  
            .SYNOPSIS  
                Starts a netsh packet trace (ETL) and saves the results  
            .NOTES 
                Author: David Maiolo - david.maiolo@gmail.com  
            .LINK  
            #>
            [CmdletBinding()]
            [OutputType([int])]
            Param
            (
            [string]$traceFile,
            [int]$traceTimeSeconds
            )
 
            $hostname = $env:COMPUTERNAME
            try{
 
                $traceFile = "$hostname`_trace_$(get-date -f yyyy-MM-ddTHHmmss).etl"
                $traceFolder = "c:\temp\"
 
                #Create subdirectory if not present
                if (!(Test-Path $traceFolder)) {
                    New-Item $traceFolder -ItemType Directory > $null
                }

                $traceFilePath = Join-Path $traceFolder $traceFile

                $traceTimeSeconds = 8
            
                #Start the Trace
                Write-Host $hostname ": Starting the trace process..."
                netsh trace start capture=yes tracefile=$traceFilePath

                #Restarting the xagt Service
                Start-DGMService -service xagt

                Write-Host $hostname ": Running trace for $traceTimeSeconds seconds..."
                sleep -Seconds $traceTimeSeconds

                #Stop The Trace
                netsh trace stop

                Write-Host $hostname ": The trace file was run for $traceTimeSeconds seconds and was Successfully created as $traceFile." -ForegroundColor Greenc
 
            }
            catch{
                Write-Host $hostname ": Unable to create the trace file of $traceTimeSeconds seconds." $error[0] -foregroundcolor red
            }
        }
    
 
        #Test FireEye Components Invocation
        Get-DGMTCPPortOpenResults -tcpports 80,443 -destinationAddress "dgmtest-hxdmz-agent-1.helix.apps.fireeye.com"
        Start-DGMService -service xagt
        Write-Host $hostname ": Waiting 8 seconds to see if service is still running..."
        sleep -Seconds 8
        Test-DGMService -service xagt
        Get-DGMTraceResults
        Get-DGMProxyPACFile
        Get-DGMFireEyeLogFile

    }#End Script Block
 
    #Get-Credentials to run script block as
    #$cred = Get-Credential -Message "Enter Credentials With Permissions to Start/Stop xagt Service and Test Ports"
 
    #Invoke the script remotely on each of the computers
    foreach ($computer in $HXEndpointComputers){
        Write-Host "Processing $computer..." -ForegroundColor Cyan
        if (Test-Connection $computer -Count 1 -ErrorAction SilentlyContinue){
            #Invoke the script block on the remote computer
            Invoke-Command -ComputerName $computer -ScriptBlock $script

            #Copy generated log files and scripts
            try{
                Write-Host $computer ": Copying generated logs and traces to $LocalPath..."
                $LocalPathSpecific = Join-Path $LocalPath $computer

                #Create subdirectory if not present
                if (!(Test-Path $LocalPathSpecific)) {
                    New-Item $LocalPathSpecific -ItemType Directory > $null
                }

                $RemotePathUNC = $RemotePath -replace "c:", "\\$computer\c$"
                $LocalPathSpecific = $LocalPathSpecific+"\"

                Copy-Item -LiteralPath $RemotePathUNC -Destination $LocalPathSpecific -Recurse -Force

            }catch{
                Write-Host $computer ": Could not copy the log files."$error[0] -ForegroundColor Red
            }

            
            #Convert .etl file to .cap
            try{
                Write-Host $computer ": converting .etl trace files to .cap files in $LocalPathSpecific...."
                
                $Files =  Get-ChildItem -Recurse $LocalPathSpecific | ? {$_ -like "*.etl"}
 
                $Count = 0
                foreach ( $File in $Files ) {
 
                    $Count++
                    Write-Host $computer ':(' $Count 'of' $Files.Count ') Generating Wireshark file for' $File
 
                    $CAPFile = $File -replace ".etl",".cap"

                    $s = New-PefTraceSession -Path $CAPFile -SaveOnStop
                    $s | Add-PefMessageProvider -Provider $File
                    $s | Start-PefTraceSession
                }
                Write-Host $computer ": Successfully converted the .etl trace files to .cap files." -ForegroundColor Green
            }catch{
                Write-Host $computer ": Could not convert the .etl trace files to .cap files. You may need to install Microsoft Message Analyzer on the computer you are running this script from (https://www.microsoft.com/en-us/download/details.aspx?id=44226)."$error[0] -ForegroundColor Red
            }

            #Cleanup Files on Remote Computer
            try{
                Remove-Item -Path $RemotePathUNC -Recurse
                Write-Host $computer ": Successfully cleaned up the files on $RemotePathUNC." -ForegroundColor Green
            }catch{
                Write-Host $computer ": Could not cleanup the files on $RemotePathUNC."$error[0] -ForegroundColor Red
            }


        }else{
            Write-Host $computer ": The computer is offline." -foregroundcolor yellow
        }
    }
    Write-Host "Run-DGMFireEyeHXCompliance Complete. Please use Microsoft Message Analyzer to view the capture files https://www.microsoft.com/en-us/download/details.aspx?id=44226" -ForegroundColor Magenta

}

function Invoke-DGMFireEyeHXCompliance{
    $HXEndpointComputers = @("DGM-SITE01-TST","DGM-SITE02-TST","VDGMDFS005VER","VDGMDFS006VER","WORKSTATION001","WORKSTATION002","WORKSTATION003") 
    $RemotePath = "c:\temp\"
    $LocalPath = "c:\Run-DGMFireEyeHXCompliance\"

    Run-DGMFireEyeHXCompliance -HXEndpointComputers $HXEndpointComputers -RemotePath $RemotePath -LocalPath $LocalPath
}