

Part 3 – PowerShell Monitors – Monitor Type

The monitor type is where we define the workflows. I'll run through creating a two-state monitor, with one workflow for "over threshold" and one for "under threshold", and also look at on-demand monitors/workflows.

When you target a single-instance class, be aware that with a two-state monitor both workflows will execute; likewise, with a three-state monitor you will have three workflows executing. If you have a multi-instance target then you will get two (or three) workflows per instance. So you can quickly see how you can start to have a major impact on the performance of the system being monitored, which is why cookdown is important.

Right-click the folder FolderMonitoringCountFoldersInFolder and select Add, New Item, Empty Management Pack Fragment.

This is the complete code that needs to be copied and pasted between the tags.

Let’s look at it in some more detail.

First off, this is a two-state monitor and we define the monitor type states. I'm going to have two: one for when the number of folders is less than or equal to the threshold, and one for when the number of folders is greater than the threshold.
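A sketch of how the two states might be declared in the monitor type fragment (the ID values here are illustrative, not the ones from the downloadable code):

```xml
<UnitMonitorType ID="Demo.CountFolders.MonitorType" Accessibility="Internal">
  <MonitorTypeStates>
    <!-- Healthy: folder count is less than or equal to the threshold -->
    <MonitorTypeState ID="UnderThreshold" NoDetection="false" />
    <!-- Unhealthy: folder count is greater than the threshold -->
    <MonitorTypeState ID="OverThreshold" NoDetection="false" />
  </MonitorTypeStates>
  <!-- Configuration, OverrideableParameters and ModuleImplementation follow -->
</UnitMonitorType>
```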

We then need to define the configuration parameters.

  • Interval Seconds and Sync Time are for our scheduler
  • From, To and Days allow us to set business hours using a schedule filter
  • Match Count allows us to only trigger a health state change (and trigger an alert) after a consecutive number of failures
  • Folder Path is the path to our “top level” folder
  • And Threshold is the maximum number of folders that is considered healthy. Any more than this and we want to trigger a health state change and an alert
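In the monitor type's Configuration element, those parameters might be declared along these lines (names and types are my assumptions based on the list above):

```xml
<Configuration>
  <!-- Scheduler settings -->
  <xsd:element minOccurs="1" name="IntervalSeconds" type="xsd:integer" />
  <xsd:element minOccurs="1" name="SyncTime" type="xsd:string" />
  <!-- Business hours for the schedule filter -->
  <xsd:element minOccurs="1" name="From" type="xsd:string" />
  <xsd:element minOccurs="1" name="To" type="xsd:string" />
  <xsd:element minOccurs="1" name="Days" type="xsd:integer" />
  <!-- Consecutive samples before a state change -->
  <xsd:element minOccurs="1" name="MatchCount" type="xsd:integer" />
  <!-- What to monitor and the healthy limit -->
  <xsd:element minOccurs="1" name="FolderPath" type="xsd:string" />
  <xsd:element minOccurs="1" name="Threshold" type="xsd:integer" />
</Configuration>
```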

In the next section we set which parameters are available as overrides.
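Each override maps an ID to a Selector pointing at the configuration element, in this general shape (a partial sketch; the full fragment would list every parameter you want overridable):

```xml
<OverrideableParameters>
  <OverrideableParameter ID="IntervalSeconds" Selector="$Config/IntervalSeconds$" ParameterType="int" />
  <OverrideableParameter ID="MatchCount" Selector="$Config/MatchCount$" ParameterType="int" />
  <OverrideableParameter ID="Threshold" Selector="$Config/Threshold$" ParameterType="int" />
</OverrideableParameters>
```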

Within our modules, you will see a data source entry which invokes the data source we created in part 2, passing over the parameters for interval seconds, sync time, folder path and threshold.

But what I have also included is a separate probe. This is used for on-demand detection, which allows the "Recalculate Health" button in Health Explorer to actually work. It doesn't need an interval seconds or sync time configuration as it is only triggered to execute on demand.
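The member modules section might therefore look something like this (the TypeIDs are placeholders standing in for the data source and probe from part 2):

```xml
<ModuleImplementation>
  <Composite>
    <MemberModules>
      <!-- Scheduled data source from part 2; receives the override values -->
      <DataSource ID="DS" TypeID="Demo.CountFolders.DataSource">
        <IntervalSeconds>$Config/IntervalSeconds$</IntervalSeconds>
        <SyncTime>$Config/SyncTime$</SyncTime>
        <FolderPath>$Config/FolderPath$</FolderPath>
        <Threshold>$Config/Threshold$</Threshold>
      </DataSource>
      <!-- Separate probe for on-demand detection; no scheduler settings required -->
      <ProbeAction ID="Probe" TypeID="Demo.CountFolders.Probe">
        <FolderPath>$Config/FolderPath$</FolderPath>
        <Threshold>$Config/Threshold$</Threshold>
      </ProbeAction>
      <!-- The condition detections and schedule filter are declared here too -->
    </MemberModules>
  </Composite>
</ModuleImplementation>
```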

We then come on to our first condition detection, which detects when the number of folders in our top-level folder is less than or equal to the threshold.

And then our next condition detection detects when the number of folders in our top-level folder is greater than our threshold. You'll see here that I have configured suppression on MatchCount, so we can configure the workflow to only trigger a health state change after x consecutive occurrences. I didn't set any suppression on the healthy workflow, because I would like the monitor to reset to healthy as soon as possible.
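As a sketch, the pair of condition detections could be built on System.ExpressionFilter like this (the module IDs and the property bag name NumberOfFolders are my illustrative assumptions):

```xml
<!-- Healthy path: count <= threshold; no suppression, so health resets immediately -->
<ConditionDetection ID="CDUnderThreshold" TypeID="System!System.ExpressionFilter">
  <Expression>
    <SimpleExpression>
      <ValueExpression>
        <XPathQuery Type="Integer">Property[@Name='NumberOfFolders']</XPathQuery>
      </ValueExpression>
      <Operator>LessEqual</Operator>
      <ValueExpression>
        <Value Type="Integer">$Config/Threshold$</Value>
      </ValueExpression>
    </SimpleExpression>
  </Expression>
</ConditionDetection>
<!-- Unhealthy path: count > threshold; suppressed until MatchCount consecutive matches -->
<ConditionDetection ID="CDOverThreshold" TypeID="System!System.ExpressionFilter">
  <Expression>
    <SimpleExpression>
      <ValueExpression>
        <XPathQuery Type="Integer">Property[@Name='NumberOfFolders']</XPathQuery>
      </ValueExpression>
      <Operator>Greater</Operator>
      <ValueExpression>
        <Value Type="Integer">$Config/Threshold$</Value>
      </ValueExpression>
    </SimpleExpression>
  </Expression>
  <SuppressionSettings>
    <MatchCount>$Config/MatchCount$</MatchCount>
  </SuppressionSettings>
</ConditionDetection>
```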

What I have done next is to configure a condition detection which sets a filter that will only pass data "on schedule".

We then have a section for regular detections; these are the "normal monitors". You'll see that we have two workflows defined: one to detect "less than or equal to threshold" and the other to detect "over threshold". I have also added the schedule filter to the over-threshold workflow. This means that outside of my configured hours the workflow will filter data and not allow it to pass (and so won't trigger an unhealthy state). My healthy (less than or equal to threshold) workflow doesn't have this filter; I want the health state to go healthy 24x7x365. This is a matter of personal taste. Some authors will put the filter onto the data source that we looked at in part 2 so that the monitor doesn't execute at all out of hours, which means the monitor will stay unhealthy even if the monitored value drops below the threshold. It all depends on what you are looking to achieve.
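Wiring that up, the regular detections chain the modules together by nesting Node elements (again using my illustrative IDs, with CDScheduleFilter standing in for the schedule filter condition detection):

```xml
<RegularDetections>
  <!-- Unhealthy detection includes the schedule filter, so it only fires in business hours -->
  <RegularDetection MonitorTypeStateID="OverThreshold">
    <Node ID="CDOverThreshold">
      <Node ID="CDScheduleFilter">
        <Node ID="DS" />
      </Node>
    </Node>
  </RegularDetection>
  <!-- Healthy detection has no schedule filter: the monitor can reset to healthy 24x7 -->
  <RegularDetection MonitorTypeStateID="UnderThreshold">
    <Node ID="CDUnderThreshold">
      <Node ID="DS" />
    </Node>
  </RegularDetection>
</RegularDetections>
```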

The final piece is the On Demand detections which enable the Recalculate Health button in Health Explorer to actually do something useful rather than just being eye candy.
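These follow the same nesting pattern as the regular detections, but feed from the on-demand probe instead of the scheduled data source (a sketch with the same illustrative IDs):

```xml
<OnDemandDetections>
  <OnDemandDetection MonitorTypeStateID="UnderThreshold">
    <Node ID="CDUnderThreshold">
      <Node ID="Probe" />
    </Node>
  </OnDemandDetection>
  <OnDemandDetection MonitorTypeStateID="OverThreshold">
    <Node ID="CDOverThreshold">
      <Node ID="Probe" />
    </Node>
  </OnDemandDetection>
</OnDemandDetections>
```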

Part 1 – PowerShell Monitors – The Script

This question came up on the TechNet forums and it raised some interesting questions about using PowerShell in a monitor.

Some standard best practice I would look to follow would be:
1. The script should always have some level of error checking along the lines of Try/Catch/Finally.

2. I like to include a debug section so that I can set an override to enable debugging which will then output more detailed information when the script is run.

3. I like my PowerShell script just to collect data. I don't want it evaluating health states, as this hard-codes the details into the script and doesn't expose them as overrides. It is much more convenient to evaluate health states in the condition detection and subsequent write action.
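Put together, those three practices give a skeleton along these lines (parameter names are illustrative, and the SCOM-specific property bag plumbing is added later):

```powershell
param(
    [string]$FolderPath,
    [string]$DebugEnabled = "false"
)

$message = ""
try {
    # Collect data only; health evaluation happens in the condition detections
    $folderCount = (Get-ChildItem -Path $FolderPath -Directory -ErrorAction Stop).Count
    $message = "Counted $folderCount folders under $FolderPath"
}
catch {
    $folderCount = -1
    $message = "Error: $($_.Exception.Message)"
}
finally {
    # Override values arrive as strings, so compare against "true"
    if ($DebugEnabled -eq "true") {
        Write-Host $message
    }
}
```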

I’ll take this in stages with this stage to include the PowerShell script and then how to update it to work within SCOM.

PowerShell script to count the number of folders in a specified folder

So let’s take a look at a PowerShell script to count the number of folders in a folder. I’ve not done a recursive search here but that could be easily achieved with some due care and consideration to the additional performance impact it will have.

Here is the code – https://raw.githubusercontent.com/f1point2/PowerShell/master/Folders/folderCount.ps1

Update the PowerShell script to run in SCOM

To make this usable in SCOM, we need to update it.

Here is the code – https://raw.githubusercontent.com/f1point2/PowerShell/master/Folders/folderCountSCOM.ps1

We instantiate the MOM.ScriptAPI object

And create a Property Bag with which to pass property values back to SCOM.
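Those two steps might look like this in the script (the property name NumberOfFolders is my illustrative choice; the linked script is the authoritative version):

```powershell
# Instantiate the SCOM scripting object (installed with the SCOM agent)
$API = New-Object -ComObject 'MOM.ScriptAPI'

# Create a property bag and add the collected value; the property name is
# what the monitor type's expression filter will query
$Bag = $API.CreatePropertyBag()
$Bag.AddValue('NumberOfFolders', $folderCount)

# Emit the bag so the SCOM workflow can consume it
$Bag
```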

If you run this on a computer without the SCOM agent installed it will fail, as the MOM.ScriptAPI COM object won't be registered. Then again, if you run it on a machine with a SCOM agent installed, you won't get any useful output to the screen.

To get some useful output to the screen, comment out the $Bag in the Finally section and replace it with $API.Return($Bag); this mirrors the way we return a property bag in VBScript.


# $Bag
$API.Return($Bag)

This provides us with XML output from which we can verify that the script is working as expected.

Once we are happy with the script, make sure to remove the $API.Return($Bag) command and remove the comment from $Bag.