PowerShell for Windows Admins


July 2, 2014  1:20 PM

CIM or WMI?

Richard Siddaway
PowerShell, WMI

Working with WMI became a whole lot easier when PowerShell came on the scene. If you ever spent hours typing all of the Echo commands that were required with VBScript to produce output you'll know what I mean. There are still a few awkward areas in the WMI cmdlets. One of the most awkward is date/time handling.

Consider looking for the last boot up time of your favourite computer:

£> Get-WmiObject -Class Win32_OperatingSystem | select -ExpandProperty LastBootUpTime
20140702083855.487133+060

 

That breaks down as year (2014), month (07), day (02), hour (08), minute (38), second (55) and fraction of a second (.487133). The +060 denotes the offset in minutes from GMT (UTC if you prefer) – I'm in the UK on daylight saving time. So, once you know how to read it, the answer is understandable but not easy to work with.
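If you ever need to convert one of these raw WMI (DMTF) datetime strings without having the original object to hand, the .NET ManagementDateTimeConverter class will do it. A minimal sketch, using the boot time string from above (in Windows PowerShell the System.Management assembly is already loaded):

```powershell
# Convert a raw DMTF datetime string to a proper [datetime]
$raw = '20140702083855.487133+060'
[System.Management.ManagementDateTimeConverter]::ToDateTime($raw)
```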

The PowerShell team introduced a method on the objects returned by Get-WmiObject that will convert the date to a more understandable format:

 

£> $os = Get-WmiObject -Class Win32_OperatingSystem
£> $os.ConvertToDateTime($os.LastBootUpTime)

02 July 2014 08:38:55

 

You can also use the method in Select-Object or the format cmdlets by using a calculated field:

£> Get-WmiObject -Class Win32_OperatingSystem | Format-List PSComputerName, Caption, @{N='BootTime'; E={$_.ConvertToDateTime($_.LastBootUpTime)}}
PSComputerName : RSSURFACEPRO2
Caption        : Microsoft Windows 8.1 Pro
BootTime       : 02/07/2014 08:38:55

 

There is an easier way – use the CIM cmdlets:

£> Get-CimInstance -ClassName Win32_OperatingSystem | select -ExpandProperty LastBootUpTime

02 July 2014 08:38:55
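Because Get-CimInstance returns LastBootUpTime as a true [datetime], you can use it directly in date arithmetic – for example, a quick uptime calculation (a sketch):

```powershell
$os = Get-CimInstance -ClassName Win32_OperatingSystem
# Subtracting two [datetime] values gives a TimeSpan – effectively the uptime
(Get-Date) - $os.LastBootUpTime
```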

 

The automatic date conversion is more than sufficient incentive for me to use Get-CimInstance in preference to Get-WmiObject.

July 1, 2014  2:32 PM

PowerShell Summit Europe 2014 – update 3

Richard Siddaway
PowerShell

Registration for the Summit opens in 14 days – 15th July

If you need some help selling the idea of attending to your boss check out this post http://richardspowershellblog.wordpress.com/2014/06/21/powershell-summit-europe-2014-reasons-to-attend-1-7/


June 30, 2014  12:55 PM

Workflows 6: suspending jobs

Richard Siddaway
PowerShell, Workflow

One of the great things about workflows is that you can stop and start them. A workflow can be stopped, on a temporary basis, by using the Suspend-Workflow activity.

workflow suspend1 {
Get-Service

Suspend-Workflow

Get-Process

}
suspend1

 

This will run the Get-Service activity – and produce output to the console. The workflow will suspend and automatically create a job. You will see output like this:

HasMoreData     : True
StatusMessage   :
Location        : localhost
StartParameters : {}
Command         : suspend1
JobStateInfo    : Suspended
Finished        : System.Threading.ManualResetEvent
InstanceId      : bbe0903d-1720-46da-a6dd-e0a927aa9e11
Id              : 8
Name            : Job8
ChildJobs       : {Job9}
PSBeginTime     : 30/06/2014 19:40:29
PSEndTime       : 30/06/2014 19:40:29
PSJobTypeName   : PSWorkflowJob
Output          : {}
Error           : {}
Progress        : {}
Verbose         : {}
Debug           : {}
Warning         : {}
State           : Suspended

 

Notice the state.

You can manage the job with the standard job cmdlets:

£> Get-Job

Id Name PSJobTypeName State     HasMoreData Location  Command
-- ---- ------------- -----     ----------- --------  -------
8  Job8 PSWorkflowJob Suspended True        localhost suspend1

 

The job is restarted using Resume-Job. Once the job has finished you can use Receive-Job to get the rest of the data.
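As a sketch, using the job id 8 from the listing above:

```powershell
# Resume the suspended workflow – the Get-Process activity then runs
Resume-Job -Id 8

# Wait for the workflow to finish, then collect the remaining output
Wait-Job -Id 8
Receive-Job -Id 8
```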


June 27, 2014  2:02 PM

Expanding server names

Richard Siddaway
PowerShell

I had a comment left on my recent post – “Bad practices – making scripts needlessly interactive” asking how to deal with the situation of N servers consecutively numbered e.g. server01 to server05. Taking the script from that post as an example:

 

[CmdletBinding()]
param (
[string[]]$computername
)

foreach ($computer in $computername){
Restart-Computer -ComputerName $computer
}

 

the questioner wanted to be able to do something like

./reboot -computername server0[1-5]

which would have the same effect as typing

./reboot -computername server01, server02, server03, server04, server05

 

There are two approaches that come to mind. The first approach would be to put some code to perform the expansion into the script and use parameter sets to differentiate between a set of full names or a group of names to expand.  This would give you something like this:

[CmdletBinding()]
param (

[parameter(ParameterSetName="Names")]
[string[]]$computername,

[parameter(ParameterSetName="Group")]
[string]$computergroup

)

switch ($psCmdlet.ParameterSetName) {
"Names"  {
foreach ($computer in $computername){
Restart-Computer -ComputerName $computer
}
}
"Group"  {
$groupdata = $computergroup.Replace("]", "") -split "\["
$prefix = $groupdata[0]
$nums = $groupdata[1] -split "-"

$nums[0]..$nums[1] |
foreach {
Restart-Computer -ComputerName "$prefix$_"
}

}
default {Write-Error "Error!!! Should not be here" }
}

 

Two parameters are defined. The computername parameter takes one or more computer names. The computergroup parameter takes some kind of shorthand notation to describe a group of consecutively named machines such as server0[1-5]. Parameter sets are used to make the parameters mutually exclusive.

Processing is based on a switch determined by parameter set name.

For a set of computer names the processing is as before.

For the shorthand notation the data is split to provide the prefix – 'server0' in this case.

The numbers are then split on the "-" to give, in effect, a first and a last value. These are used with the range operator and passed through Foreach-Object to the Restart-Computer cmdlet, where the computer name is created.

This approach works but it has one major drawback. You would need to put the code to expand the server names into each and every script or function you wrote. That’s extra work we can avoid.

The essence of PowerShell cmdlets is that they are small pieces of code that each perform a discrete job. If we follow that pattern we can split the expansion of the server names out into a reusable piece of code that then passes computer names to our other scripts or functions. I would make the expand code a function so that you can load it through a module or your profile.

function expand-servername {
[CmdletBinding()]
param (

[string]$computergroup

)
$computers = @()

$groupdata = $computergroup.Replace("]", "") -split "\["
$prefix = $groupdata[0]
$nums = $groupdata[1] -split "-"

$nums[0]..$nums[1] |
foreach {
$computers += "$prefix$_"
}

Write-Output $computers
}

 

Using the function generates the server names:

£> . .\expand-servername.ps1
£> expand-servername -computergroup server0[1-5]
server01
server02
server03
server04
server05

 

You can use the reboot script like this:

.\reboot.ps1 -computername (expand-servername -computergroup server0[1-5])

 

If I was doing this for my production environment I would make the reboot script into an advanced function that accepted pipeline input. The reboot and expand-servername functions would be part of a module that could auto load. Alternatively, make expand-servername part of a module that you auto load. You could expand the options in expand-servername to accommodate multiple patterns of names.
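As a sketch of that pipeline-capable approach – the function name here is my own choice for illustration, not production code:

```powershell
function Restart-Server {
[CmdletBinding()]
param (
# Accept computer names from the pipeline as well as by parameter
[parameter(Mandatory=$true, ValueFromPipeline=$true)]
[string[]]$computername
)
PROCESS {
foreach ($computer in $computername){
Restart-Computer -ComputerName $computer
}
}
}

# The two functions then compose naturally on the pipeline:
expand-servername -computergroup server0[1-5] | Restart-Server
```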

 


June 26, 2014  11:34 AM

Workflows: 5a CDXML modules update

Richard Siddaway
PowerShell, Workflow

In my last post I questioned why commands from CDXML didn't fail, as there weren't any activities defined for them. It turns out that functions and other commands that don't explicitly have their own workflow activities are implicitly wrapped in the InlineScript activity. As CDXML modules effectively create a function, workflows accept them and run.

Thanks to Steve Murawski for the answer.


June 24, 2014  2:11 PM

Workflows: 5 CDXML modules

Richard Siddaway
PowerShell, Workflow

Last time we saw that you're not really using cmdlets in PowerShell workflows – you're using workflow activities. Some cmdlets haven't been packaged into activities and for those you need to put them in an InlineScript block. You can also use an InlineScript block to run any arbitrary piece of PowerShell.

One thing I hadn't tried was using some of the CDXML (WMI class wrapped in XML and published as a PowerShell module) modules that ship in Windows 8 and later. As far as I was aware they hadn't been packaged as activities, so I thought I'd try this:

workflow net1 {
parallel {
Get-NetAdapter
Get-NetIPAddress
}
}
net1

 

Surprisingly it worked.

The only reason I can think of that it works is the way CDXML publishes the cmdlets as functions – you can see this with:

ls function:\Get-NetAdapter | fl *

This enables a workflow to use the function. Even if the module containing the cmdlet hasn't been explicitly or implicitly loaded, the workflow still runs.

You can tell these cmdlets haven’t been packaged as activities as the activity common parameters aren’t available on them.
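A quick way to see that difference (a sketch – try it in a workflow definition): a command packaged as an activity, such as Get-Process, accepts activity common parameters like -PSComputerName, while the CDXML-based commands do not:

```powershell
workflow test-activity {
# Get-Process has been packaged as an activity, so this parses
Get-Process -PSComputerName localhost

# Get-NetAdapter has not – uncommenting the next line gives a parse error
# Get-NetAdapter -PSComputerName localhost
}
```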

One of life’s little mysteries. I’ll get to the bottom of it and find out why eventually. In the meantime our workflows just became a whole lot richer in terms of functionality.


June 23, 2014  11:24 AM

Bad practices – making scripts needlessly interactive

Richard Siddaway
PowerShell

The PowerShell community spends a lot of time talking about best practices when using PowerShell. I'm not convinced this approach is working as we keep seeing the same bad practices coming through in forum questions. I thought I'd turn the problem on its head and present a now-and-again series of bad practices. These are things I've seen people doing that either make their scripts far more complicated than they need to be or just completely negate the power that PowerShell brings to your daily admin tasks.

I think that a lot of this is due to people not taking the time to learn the best way to use PowerShell. The thinking goes something like this – I can script with XXX language. PowerShell is a scripting language. Therefore I know how to use PowerShell.

NO. WRONG.

PowerShell should be thought of more as an automation engine than a scripting language. The reason for using PowerShell is to perform tasks that are one or more of: repetitive, boring, long, complicated, error prone, or can't be done in the GUI. The goal of automating a task should be that you can run it automatically without human intervention if you need to.

I’ll give an example.

Some years ago I was involved in migrating a new customer into the managed service environment of the company I was working for. Step one was to move them out of their previous supplier's data centre. As part of that move I wrote a PowerShell script that shut down every machine in that environment. The script took a list of computers from a file and shut them down in order. My final action was to shut down the machine I'd used to run the script.

I actually ran the script manually but I could have run it as a scheduled job – which I did another time.

What do I mean by making scripts needlessly interactive?

Consider this example:

 

$hs = @"
Press 1 to reboot computera
Press 2 to reboot computerb
Press 3 to reboot computera and computerb

"@

$x = Read-Host -Prompt $hs

switch ($x) {
1  {Restart-Computer -ComputerName $computera; break}
2  {Restart-Computer -ComputerName $computerb; break}
3  {
Restart-Computer -ComputerName $computera
Restart-Computer -ComputerName $computerb
break
}
}

 

The script defines the prompt – a menu in effect – gets the user's choice and performs the required reboots. It works BUT it's so wasteful in terms of the effort required. Even worse, it's not sustainable. When you add another computer your choices become a, b, c, a+b, a+c, b+c, a+b+c. Now think what happens as your list of possible computers grows to 5 or 10 or 100!

Every time you find yourself typing Read-Host stop and ask yourself if it's really needed. I'm not suggesting that cute little animals will die if you use it but you will cause yourself unnecessary work – now and in the future.

So what should you do?

Use a parameterised script.

 

[CmdletBinding()]
param (
[string[]]$computername
)

foreach ($computer in $computername){
Restart-Computer -ComputerName $computer
}

 

Call the script reboot.ps1 for the sake of argument.

You can then use it like this:

./reboot -computername computerA
./reboot -computername computerb

./reboot -computername computerA, computerb, computerc

 

If you have a lot of machines to reboot:

./reboot -computername (get-content computers.txt)

Put the names in a file (one per line) and call the script as above. You could even have different files for different groups of computers if required.
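For example, with hypothetical per-group files:

```powershell
# One computer name per line in each file – the file names are illustrative
./reboot.ps1 -computername (Get-Content .\webservers.txt)
./reboot.ps1 -computername (Get-Content .\sqlservers.txt)
```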

Less typing and easier to use and maintain.

Now you’re using the power that PowerShell brings you.


June 22, 2014  8:23 AM

PowerShell.org – TechSession webinars

Richard Siddaway
PowerShell

PowerShell.org are starting a series of technical webinars. Details from

http://powershell.org/wp/techsession-webinars/


June 22, 2014  8:13 AM

Workflows: 4 Using cmdlets

Richard Siddaway
PowerShell, Workflow

This is a simple function to return some data about a system:

function get-serverdata {

Get-CimInstance -ClassName Win32_OperatingSystem

Get-Service

Get-Process
}

get-serverdata

 

The function will return the CIM data about the operating system, then the service data and then the process data. The only difference to running the cmdlets interactively is that the display for services and processes defaults to a list rather than a table.

Status      : Stopped
Name        : WwanSvc
DisplayName : WWAN AutoConfig

Id      : 1524
Handles : 81
CPU     : 0.015625
Name    : armsvc

 

Now let's try that as a workflow:

 

workflow get-serverdata {

Get-CimInstance -ClassName Win32_OperatingSystem

Get-Service

Get-Process
}

get-serverdata

 

You'll see the data returned in exactly the same order – operating system, services and processes. The only difference is that a PSComputerName property is added to the output.

Status         : Stopped
Name           : WwanSvc
DisplayName    : WWAN AutoConfig
PSComputerName : localhost

Id             : 1524
Handles        : 81
CPU            : 0.015625
Name           : armsvc
PSComputerName : localhost

 

I didn't emphasise this when discussing parallel and sequential processing earlier but the default action for a workflow is to process the commands sequentially. You use the parallel and sequence keywords to control that processing – for instance to run the cmdlets in parallel:

workflow get-serverdata {

parallel {
Get-CimInstance -ClassName Win32_OperatingSystem

Get-Service

Get-Process
}

}

get-serverdata

 

From this I see a block of service data (in table format) followed by the first process, another service, the CIM data and more services, then mixed service and process data until the end.

I can’t emphasise this point enough – when running workflow tasks in parallel you have no control of the order in which data is returned.

 

You may be tempted to try something like this:

workflow get-serverdata {

parallel {
Get-CimInstance -ClassName Win32_OperatingSystem

Get-Service | Format-Table

Get-Process | Format-Table
}

}

get-serverdata

 

Don't. You'll see an error message:

At line:6 char:15
+ Get-Service | Format-Table
+               ~~~~~~~~~~~~
Cannot call the ‘Format-Table’ command. Other commands from this module have been packaged as workflow activities, but this command was specifically excluded. This is likely because the command requires an interactive Windows PowerShell session, or has behavior not suited for workflows. To run this command anyway, place it within an inline-script (InlineScript { Format-Table }) where it will be invoked in isolation.
+ CategoryInfo          : ParserError: (:) [], ParseException
+ FullyQualifiedErrorId : CommandActivityExcluded

 

There are a couple of very important pieces of information in this message:

“packaged as workflow activities”

The commands used in the workflows in this post ARE NOT CMDLETS. I've capitalised that for emphasis. Workflows do not run PowerShell cmdlets – they use workflow activities. These are cmdlets repackaged to run in workflows. Not all cmdlets have been repackaged as activities and not all cmdlets can be, which leads to the second important point from the error.

“To run this command anyway, place it within an inline-script (InlineScript { Format-Table }) where it will be invoked in isolation. “

 

Any cmdlet or command that hasn’t been specifically converted to a workflow activity can be wrapped in an inlinescript block if you need to run it in a workflow. The example would become:

workflow get-serverdata {

parallel {
Get-CimInstance -ClassName Win32_OperatingSystem

inlinescript {
Get-Service | Format-Table

Get-Process | Format-Table
}

}

}

get-serverdata

 

You can see a list of the cmdlets that have not been implemented as workflow activities at this link http://technet.microsoft.com/en-us/library/jj574194.aspx

Scroll down to the excluded cmdlets section.

 

PowerShell workflows look like PowerShell but they are not PowerShell. Next time we’ll dig a bit further under the covers of PowerShell workflows.


June 21, 2014  10:03 AM

PowerShell Summit Europe 2014 – reasons to attend 1..7

Richard Siddaway
PowerShell

With the opportunity to register for the first PowerShell Summit in Europe fast approaching you may be looking for reasons you can use to convince your boss to let you go.  These are just some of the reasons you should attend:

Excellent sessions – our sessions are deliberately kept short and focussed. There's no time for the speaker to waffle and pad with only 45 minutes. You get more information for your money this way. The speakers are acknowledged experts in their areas with years of experience using PowerShell. We emphasise code over slides – you get to see PowerShell in use, not just a long list of PowerPoint slides.

Meet some of the world's foremost PowerShell experts – the speakers remain available throughout the whole conference. Our sessions aim to be interactive so you can ask questions and even provoke a discussion.

Get your questions answered – the Summit will be attended by members of the PowerShell team and a significant number of PowerShell experts such as PowerShell MVPs. They will be more than happy to answer your questions. If you can’t get your PowerShell question answered at a PowerShell Summit there probably isn’t an answer.

Talk to people with similar problems to solve – the attendees are all PowerShell users; this is not a conference to attend if you haven't mastered at least the basics of PowerShell. Many of them will have similar problems, and may even have the solution. The PowerShell community loves to share. This is an opportunity to talk to those people and find your answers.

Contribute to a community resource – become a member of the PowerShell Team for a night. We will be solving one of the PowerShell Team's wish list items in a special evening event. Work with other PowerShell experts to create a resource that you can use. It's amazing how many learning points come out of these activities.

Get additional AWPP benefits – the conference is one of your AWPP benefits – http://powershell.org/wp/association-for-windows-powershell-professionals/ – take advantage of the script review process to get additional problems solved.

Become better at using PowerShell – learn new techniques that you can apply. The demo code from the speakers will be available so you can use and adapt to help solve your problems.

You won't find a better 3 days of PowerShell related activity. Hope to see you there!

