
Part 3 – Lessons Learned

While the migration script works, there are some important caveats.

1. There is no alerting in Azure Application Insights by default, so if you have alerts configured in GSM, you'll need to recreate them manually in Azure Application Insights. You'll also need to look at creating action groups and a new notification workflow (unless you implement the Azure Management Pack, which is covered in the next post in the series).

2. The monitoring parameters available are significantly more limited than those in Global Service Monitor:

  • The content match only supports contains
  • The return code check only matches a specific return code, so our greater-than-400 configuration in GSM cannot be replicated in Application Insights. I guess we just look for a 200 and treat anything else as an issue
  • GSM has a list of performance metrics that you can collect; Azure Application Insights has just response time.

Azure Application Insights is without doubt a powerful solution as part of monitoring web applications, but as a pure URL monitor (a simple ping test) you need to consider whether what it provides is worth the effort of deployment. If you are already an Azure customer using Application Insights then it is a no-brainer. If you don't have an Azure presence then you probably already have another solution available via your cloud provider. If you are an Azure customer who doesn't yet use Application Insights, then the migration brings less functionality than GSM and requires more administrative overhead to configure, especially if you want the alerts in SCOM via the Azure Management Pack. It is for each individual to decide what works best for them.

I have a set of PowerShell scripts that I can run from on-premises watcher nodes in SCOM to gather and alert on far more useful data, far more easily, than Application Insights provides. I'll accept that my tests run from internal \ on-premises servers, so we miss the outside-in monitoring perspective, but I'll live with that constraint.

It is an interesting strategy from Microsoft. It is not just trying to entice customers to Azure; it is a rather crude kick to try to force Azure take-up and consumption. The short timeline isn't customer friendly as it doesn't give much time for customers to consider alternatives. But that, I guess, is part of Microsoft's strategy.

Part 6 – The Recovery

And finally we have a recovery, which will allow the operator to delete the folders in the top-level folder.

This is the PowerShell code

And this is the recovery code

This shows the key piece of passing data from the monitor to the recovery.
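The original recovery script isn't reproduced above, so as a rough sketch only, a recovery that removes the subfolders of a top-level folder might look like the following. The function name, parameter name, and deletion logic are my assumptions for illustration; in SCOM the folder path would be populated from the data passed down by the monitor rather than called directly.

```powershell
# Hypothetical sketch of the recovery logic: delete the immediate
# subfolders of a top-level folder. In SCOM, FolderPath would come
# from the monitor's context data rather than a direct call.
function Remove-ChildFolders {
    param(
        [Parameter(Mandatory = $true)]
        [string]$FolderPath
    )

    # Remove each immediate subfolder (non-recursive enumeration,
    # but each subfolder is deleted with its contents)
    Get-ChildItem -Path $FolderPath -Directory |
        Remove-Item -Recurse -Force
}
```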

Part 2 – PowerShell Monitors – The Modules

Now that we have our script and we are confident that it works, we need to start the process of getting it into SCOM. There are four module types that we will need to put together, which are discussed here.

They are:

  1. A probe action which will be our script
  2. A composite data source which will consist of:
    • A data source (scheduler) which will provide the mechanism for determining how often our script will run
    • The probe action (the script)

The PowerShell script

I have created the following folder structure to provide a framework for this walkthrough. I wouldn't do it this way in general, but it does provide a step-by-step guide through the process.

Then, right click on the folder FolderMonitoringCountFoldersInFolder and select Add, New Item, PowerShell script file

Name it 1_PScript1.ps1 and click Add

Copy and paste the PowerShell script into the window and save

The Probe

Right click on the folder FolderMonitoringCountFoldersInFolder and select Add, New Item, Empty Management Pack Fragment and call it _Probe. Copy and paste the following code between the tags.

The key part of the code is as follows:

We are running the PowerShell script and passing in two parameters: the folder path (the top-level folder) and the threshold. The only reason I am passing the threshold in as a parameter is so that the script can output it in the property bag, allowing us to use it in the alert description.
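The original fragment isn't shown above, so purely as an illustration, a probe action module wrapping a PowerShell property bag script generally takes the following shape. All IDs, the timeout, and the parameter names are my assumptions, not the author's actual code, and the Windows! and System! aliases are assumed to be declared in the management pack references.

```XML
<TypeDefinitions>
  <ModuleTypes>
    <!-- Illustrative probe action module only; IDs and names are assumptions -->
    <ProbeActionModuleType ID="Demo.CountFoldersInFolder.Probe" Accessibility="Internal" Batching="false">
      <Configuration>
        <xsd:element minOccurs="1" name="FolderPath" type="xsd:string" />
        <xsd:element minOccurs="1" name="Threshold" type="xsd:integer" />
      </Configuration>
      <ModuleImplementation Isolation="Any">
        <Composite>
          <MemberModules>
            <!-- The standard PowerShell property bag probe from the Windows library -->
            <ProbeAction ID="PSScript" TypeID="Windows!Microsoft.Windows.PowerShellPropertyBagProbe">
              <ScriptName>1_PScript1.ps1</ScriptName>
              <ScriptBody>$IncludeFileContent/1_PScript1.ps1$</ScriptBody>
              <Parameters>
                <Parameter>
                  <Name>FolderPath</Name>
                  <Value>$Config/FolderPath$</Value>
                </Parameter>
                <Parameter>
                  <Name>Threshold</Name>
                  <Value>$Config/Threshold$</Value>
                </Parameter>
              </Parameters>
              <TimeoutSeconds>120</TimeoutSeconds>
            </ProbeAction>
          </MemberModules>
          <Composition>
            <Node ID="PSScript" />
          </Composition>
        </Composite>
      </ModuleImplementation>
      <OutputType>System!System.PropertyBagData</OutputType>
      <InputType>System!System.BaseData</InputType>
    </ProbeActionModuleType>
  </ModuleTypes>
</TypeDefinitions>
```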

The Composite Data Source

Right click on the folder FolderMonitoringCountFoldersInFolder and select Add, New Item, Empty Management Pack Fragment and call it 3_DS. Copy and paste the following code.
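Again, the original fragment isn't reproduced above. As an illustration only, a composite data source combining a scheduler with the probe typically looks something like this; all IDs and configuration names are my assumptions, and the probe TypeID is the hypothetical one from the probe sketch rather than the author's.

```XML
<TypeDefinitions>
  <ModuleTypes>
    <!-- Illustrative composite data source: scheduler driving the probe -->
    <DataSourceModuleType ID="Demo.CountFoldersInFolder.DataSource" Accessibility="Internal" Batching="false">
      <Configuration>
        <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />
        <xsd:element minOccurs="1" name="FolderPath" type="xsd:string" />
        <xsd:element minOccurs="1" name="Threshold" type="xsd:integer" />
      </Configuration>
      <ModuleImplementation Isolation="Any">
        <Composite>
          <MemberModules>
            <!-- The scheduler determines how often the script runs -->
            <DataSource ID="Scheduler" TypeID="System!System.Scheduler">
              <Scheduler>
                <!-- SimpleReccuringSchedule is spelled this way in the schema -->
                <SimpleReccuringSchedule>
                  <Interval Unit="Seconds">$Config/IntervalSeconds$</Interval>
                </SimpleReccuringSchedule>
                <ExcludeDates />
              </Scheduler>
            </DataSource>
            <!-- The probe action (the script) -->
            <ProbeAction ID="Probe" TypeID="Demo.CountFoldersInFolder.Probe">
              <FolderPath>$Config/FolderPath$</FolderPath>
              <Threshold>$Config/Threshold$</Threshold>
            </ProbeAction>
          </MemberModules>
          <Composition>
            <Node ID="Probe">
              <Node ID="Scheduler" />
            </Node>
          </Composition>
        </Composite>
      </ModuleImplementation>
      <OutputType>System!System.PropertyBagData</OutputType>
    </DataSourceModuleType>
  </ModuleTypes>
</TypeDefinitions>
```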

Part 1 – PowerShell Monitors – The Script

This question came up on the TechNet forums and it raised some interesting questions about using PowerShell in a monitor.

Some standard best practice I would look to follow would be:
1. The script should always have some level of error checking along the lines of Try \ Catch \ Finally.

2. I like to include a debug section so that I can set an override to enable debugging which will then output more detailed information when the script is run.

3. I like my PowerShell scripts just to collect data. I don't want to be evaluating health states, as this hard-codes the details into the scripts and doesn't expose them as overrides. It is much more convenient to evaluate health states in the condition detection and subsequent write action.

I'll take this in stages, with this stage covering the PowerShell script and then how to update it to work within SCOM.

PowerShell script to count the number of folders in a specified folder

So let's take a look at a PowerShell script to count the number of folders in a folder. I've not done a recursive search here, but that could easily be achieved with due care and consideration of the additional performance impact it would have.

Here is the code – https://raw.githubusercontent.com/f1point2/PowerShell/master/Folders/folderCount.ps1
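The linked script is the authoritative version; as a rough sketch only, a non-recursive folder count that follows the best practices above (error handling, a debug switch, and data collection without health-state logic) might look like the following. The parameter names and defaults here are my assumptions, not necessarily those in the linked code.

```powershell
# Illustrative sketch only: count immediate subfolders of a folder.
# Parameter names and defaults are assumptions for this walk-through.
param(
    [string]$FolderPath = [IO.Path]::GetTempPath(),  # assumed example default
    [bool]$DebugFlag = $false                        # override-able debug switch
)

try {
    # Non-recursive: only the immediate child folders are counted
    $folderCount = @(Get-ChildItem -Path $FolderPath -Directory -ErrorAction Stop).Count

    if ($DebugFlag) {
        Write-Output "DEBUG: $folderCount folders found under $FolderPath"
    }
}
catch {
    # Surface errors rather than failing silently
    Write-Output "ERROR: $($_.Exception.Message)"
    $folderCount = $null
}
finally {
    # Output the raw count only; health evaluation happens later in SCOM
    Write-Output $folderCount
}
```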

Update the PowerShell script to run in SCOM

To make this usable in SCOM, we need to update it.

Here is the code – https://raw.githubusercontent.com/f1point2/PowerShell/master/Folders/folderCountSCOM.ps1

We instantiate the MOM.ScriptAPI object

And create a Property Bag with which to pass property values back to SCOM.
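For reference, those two steps typically look like the lines below. This only runs on a machine with the SCOM agent installed, since MOM.ScriptAPI is registered by the agent, and the property names are illustrative rather than taken from the linked script.

```powershell
# Requires the SCOM agent: MOM.ScriptAPI is only registered on managed machines
$API = New-Object -ComObject 'MOM.ScriptAPI'
$Bag = $API.CreatePropertyBag()

# Property names here are illustrative; the linked script defines its own
$Bag.AddValue('FolderCount', $folderCount)
$Bag.AddValue('Threshold', $Threshold)

# In PowerShell we simply output the bag at the end of the script
$Bag
```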

If you run this on a computer without the SCOM agent installed, it will fail. Then again, if you run it on a machine with a SCOM agent installed, you still won't get useful output on screen.

To get some useful output to the screen, you need to comment out the $Bag in the Finally section and replace it with $API.Return($Bag) – this is the way property bags are returned in VBScript.


# $Bag
$API.Return($Bag)

This provides us with XML output from which we can verify that the script is working as expected.

Once we are happy with the script, make sure to remove the $API.Return($Bag) command and remove the comment from $Bag.