When PowerShell v2 shipped with the ISE it was seen as a great step forward. We finally had a decent editor for creating and running PowerShell code, and you could invoke the debugger too. Some extensions to the ISE have appeared since, most notably Show-Command, but it's essentially the same editor that shipped with PowerShell v2.
Visual Studio Code – now at version 1.11.2 – offers an interesting alternative. It supports a host of other languages as well as PowerShell. I currently have the extensions for Docker, Markdown, SQL, PowerShell, JSON and XML loaded. Many others are available as open source projects.
You can also open a terminal window, which can be a command prompt, PowerShell or WSL bash – you could have all three open simultaneously if required.
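The default shell for the integrated terminal can be changed in VS Code's settings.json. A minimal sketch – the PowerShell path below is the standard install location on Windows, but check it on your own machine:

```json
{
  // Shell launched by the integrated terminal on Windows.
  // Swap in cmd.exe or bash.exe (WSL) as preferred.
  "terminal.integrated.shell.windows": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
}
```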
The other big plus is that VS Code is cross platform so I can use the same editor on Windows and Linux. A big plus in these days of heterogeneous environments.
I’m going to try using VS Code instead of ISE for a while to see if it suits the way I work. If so, it’ll become my default editor.
Way back when we were known as Administrators. Then the term IT Professional (often irritatingly shortened to IT Pro) appeared. We’re doing the same job but have a fancy new title.
Are System Administrators really professionals in the true sense of the word?
I would argue no.
– We don’t have a universally recognised certification or qualification requirement
– We don’t have a professional body
– We don’t have a continuous learning requirement to maintain the title
– We don’t have a recognised body of knowledge that accurately defines how and what we should do
You may argue with these points and say for instance that you do keep learning – congratulations – you’re in the minority that does.
We have partial answers to my points – vendor certifications and best practice documentation, for instance – but at the moment it’s all very piecemeal.
If IT wants to be treated as a profession, its practitioners have to behave as professionals, and at the moment I don’t think that happens in the vast majority of cases. There are exceptions, and hopefully over time that behaviour will become the norm. Until then I’m going to stick with Administrators.
DevOps is the latest “big thing” in IT. Whether it will make a difference or be dropped as everyone rushes to embrace the next “big thing” only time will tell.
For now, there’s a free ebook from the DevOps Collective (the people who bring you the PowerShell Summit) that looks at DevOps from the OPs perspective:
A new set of repositories on GitHub documents a process for sharing end-to-end, scenario-based DSC configurations.
These are open to community involvement.
I’m going to be creating, using and discarding a number of VHDs for my diskpart and PowerShell series. When I have a number of them mounted I want a quick way to dismount them. Assuming I consistently keep them in the same folder, this one-liner does the job very nicely:
Get-ChildItem -Path C:\test\ -Filter *.vhdx | Dismount-VHD
Why does it work?
Because Get-ChildItem emits System.IO.FileInfo objects carrying the full path of each file, and Dismount-VHD accepts pipeline input for the Path of the VHD to dismount:
Specifies one or more virtual hard disk files for which the corresponding virtual hard disks are to be dismounted.
Default value: None
Accept pipeline input? True (ByValue, ByPropertyName)
Accept wildcard characters? False
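If you want to check that binding for yourself, the parameter metadata shows it directly. These are standard PowerShell commands, not specific to Hyper-V – the same trick works for any cmdlet:

```powershell
# Show the help entry for the Path parameter, including pipeline behaviour
Get-Help Dismount-VHD -Parameter Path

# Or inspect the parameter metadata directly
(Get-Command Dismount-VHD).Parameters['Path'].Attributes |
    Where-Object { $_ -is [System.Management.Automation.ParameterAttribute] } |
    Select-Object ValueFromPipeline, ValueFromPipelineByPropertyName
```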
Last time we created a virtual disk and mounted it. In this post we’ll initialize the disk and create a volume.
Start by remounting the disk:
Get-VHD -Path C:\test\Test1.vhdx | Mount-VHD
You can now initialize the disk:
Initialize-Disk -Number 1
Create a partition:
New-Partition -DiskNumber 1 -DriveLetter F -UseMaximumSize
Ignore the message about formatting as you want to control that:
Format-Volume -DriveLetter F -FileSystem NTFS -Confirm:$false -Force
Your new disk is ready to use.
The diskpart equivalents can be found here: https://technet.microsoft.com/en-us/library/cc766465(v=ws.10).aspx
You can perform the creation and formatting of the disk in one pass:
New-VHD -Path C:\test\Test2.vhdx -Dynamic -SizeBytes 10GB |
Mount-VHD -Passthru |
Initialize-Disk -PassThru |
New-Partition -DriveLetter G -UseMaximumSize |
Format-Volume -FileSystem NTFS -Confirm:$false -Force
Parameterize the path, size and drive letter and you have a handy function for setting up disks.
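A minimal sketch of that function – the name, parameter defaults and validation are my own choices, so adjust to taste:

```powershell
function New-TestVHD {
    [CmdletBinding()]
    param (
        # Path for the new VHDX file
        [Parameter(Mandatory)]
        [string]$Path,

        # Size of the dynamic disk; defaults to the 10GB used in this series
        [uint64]$SizeBytes = 10GB,

        # Drive letter to assign to the new volume
        [Parameter(Mandatory)]
        [char]$DriveLetter
    )

    # Same pipeline as above: create, mount, initialize, partition, format
    New-VHD -Path $Path -Dynamic -SizeBytes $SizeBytes |
        Mount-VHD -Passthru |
        Initialize-Disk -PassThru |
        New-Partition -DriveLetter $DriveLetter -UseMaximumSize |
        Format-Volume -FileSystem NTFS -Confirm:$false -Force
}

# Usage
New-TestVHD -Path C:\test\Test3.vhdx -SizeBytes 5GB -DriveLetter H
```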
Before we start digging into the diskpart/Storage module functionality we need a disk to practice on. I don’t recommend using your machine’s system disk – bad things will happen.
The Hyper-V module has a New-VHD cmdlet, so let’s use that to create a disk to play with. The great thing about virtual disks is that you can delete them if everything goes horribly wrong.
There is a New-VirtualDisk cmdlet in the Storage module, but that works with storage pools. We’ll cover that later in the series.
Let’s create a virtual disk:
New-VHD -Path C:\test\Test1.vhdx -Dynamic -SizeBytes 10GB
You can access the virtual disk using its path:
Get-VHD -Path C:\test\Test1.vhdx
You need to mount the virtual disk before you can work with it:
Get-VHD -Path C:\test\Test1.vhdx | Mount-VHD
Once mounted, you can use Get-Disk to identify the virtual disk:
PS> Get-Disk | select Number, FriendlyName, PartitionStyle
Number FriendlyName               PartitionStyle
------ ------------               --------------
     1 Msft Virtual Disk          RAW
     0 Samsung SSD 840 PRO Series MBR
The clue is in the friendly name and that the partition style is RAW.
Next time we’ll look at formatting and partitioning the new drive. For now, we’ll just dismount the virtual disk:
Dismount-VHD -DiskNumber 1
An attendee at the Summit stated that the DiskPart utility didn’t have any equivalent in PowerShell. That’s not strictly true, as the Storage module provides a lot of functionality that maps to diskpart’s.
The module contents include:
PS> Get-Command -Module Storage | select name
In this mini-series I’m going to go through a number of the diskpart options and show you how to do the same with the Storage module cmdlets.
I’m not sure if all diskpart options are available but it’ll be fun to find out.
The Storage module was introduced with Windows 8/Server 2012.
A look at the module’s files shows a number of cdxml files. This means that the cmdlets are based on CIM classes, which you can see with:
Get-CimClass -Namespace ROOT/Microsoft/Windows/Storage
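You can also query those classes directly. A quick sketch – MSFT_Disk is one of the classes in that namespace, and it mirrors the information Get-Disk exposes, though note that some properties (PartitionStyle, for instance) come back as raw enumeration values rather than friendly names:

```powershell
# List the storage-related CIM classes (Windows 8/Server 2012 and later)
Get-CimClass -Namespace ROOT/Microsoft/Windows/Storage |
    Sort-Object CimClassName |
    Select-Object CimClassName

# Query one class directly - compare with the output of Get-Disk
Get-CimInstance -Namespace ROOT/Microsoft/Windows/Storage -ClassName MSFT_Disk |
    Select-Object Number, FriendlyName, PartitionStyle
```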
These classes aren’t available on versions of Windows prior to Windows 8/Server 2012.
I’ll also have a look at some of these classes to see if there’s anything we can do that isn’t covered directly by the storage module cmdlets.
2017 saw our largest Summit to date. 250 PowerShell fanatics (I use the word advisedly) descended on Bellevue, Washington. The conversations had already started when I arrived at the hotel on the Friday night before the Summit!
We had a planning meeting for Summit 2018 on the Saturday. Some good ideas came from the meeting that we’ll share in a little while. If you thought this year was good, next year will amaze you.
The Summit opened on Sunday with 3-hour Deep Dives in the morning and afternoon. During the morning I discovered that the issues Delta Air Lines was having were causing problems for speakers travelling to the Summit. In the end only one speaker was unable to reach the Summit. We have a plan to mitigate any future issues with speaker drop-outs that we’ll be implementing for Summit 2018 and later (yes, we do plan more than a year in advance).
Monday saw the PowerShell team presenting on PowerShell now and in the future with team members showing what they’re working on now!
Tuesday and Wednesday morning saw some amazing standard-length sessions and the first outing for the Three Furies of PowerShell – who knows, they may appear again sometime.
Wednesday afternoon saw the Community Lightning Demos – everything I’ve heard says they were amazing; I was moderating two panel discussions at the time. We also had some longer technical sessions.
I saw a large number of people I recognised from previous Summits. I asked why they came back and consistently received two reasons:
– the high level of the technical content
– the ability to talk to speakers, MVPs, team members and other attendees about their PowerShell problems and get answers to those problems
Be assured we’ve taken those 2 things on board and are committed to preserving those aspects of the Summit.
A huge thanks to the speakers, the PowerShell team and the attendees for making a fantastic Summit. A little while to reflect and catch my breath, and then it’s time to dive into the work for next year.
Why do some technologies become widely adopted while others are seemingly abandoned – often without any real testing? What do I mean by abandoned technologies? Things like Server Core, for instance. And I suspect that Nano Server and even containers on Windows will follow and become abandoned technologies.
Server Core first appeared in Windows Server 2008! In nearly 10 years of existence, how many organisations are utilising Server Core to its full potential? Very few, in my experience. I suspect many, if not most, organisations don’t use it at all.
Nano Server was introduced with Server 2016. It’s totally headless and has a very small footprint – you can pack hundreds of them onto a 64GB host. Nano Server supports a limited number of roles, but if you need a small-footprint server to host a web site, host VMs or containers, or act as a file server, for instance, it’s ideal.
The last thing I suspect may join my list of abandoned technologies is Windows Containers. Again introduced with Server 2016, containers offer a lightweight route to running your applications. With the ability to easily move containers between machines, deployments from development to testing and production become much simpler.
So, why do I think these are, or will become, abandoned technologies?
The reason is that the majority of Windows administrators don’t want to adopt these technologies. They either actively block them or passively ignore them.
Why does this happen? Look at the three technologies again – none of them has a GUI! Until Windows administrators fully embrace remote, automated administration techniques, these will remain abandoned technologies.
The day of administrators who can’t, or won’t, automate is ending – slowly but surely the pressures to move to a more automated environment are growing. Maybe it’ll happen soon enough that Server Core, Nano Server and Windows containers will stop being abandoned technologies.