Azure Building Blocks – Like Legos™ for the cloud?

I thought this was an interesting announcement.  Yesterday Microsoft announced a new way to more simply deploy Infrastructure-as-a-Service (IaaS) resources into Azure, along with a new command line tool to drive it.  The tool is called Azure Building Blocks.  These building blocks are described as “a set of tools and Azure Resource Manager templates that are designed to simplify deployment of Azure resources.”

“But isn’t that what Azure Resource Manager templates are for?”

Absolutely, but apparently some people find the creation and management of the full templates a little too complex, especially when they only need to spin up some resources with basic best-practice or common default parameters.  “I just want a Windows Server VM.. I don’t want to have to define the storage type or network or all that.  Just do it.”  And so you’ll define your machine (or other resources) in a much simpler template (parameter) file that will be used to deploy it.  It’s “A tool for deploying Azure infrastructure based on proven practices.”

“Cool!  How do I get started?”

The Azure Building Blocks page is where you should begin.  The tool claims to run equally well on Windows or Linux (or Bash on Windows, or even in the Azure Cloud Shell).  Fundamentally, it requires the newest Azure CLI as well as Node.js installed in your shell of choice.  Then you install the tool with this command:

npm install -g @mspnp/azure-building-blocks

Verify that the tool is installed by running “azbb”, and you should see the typical command usage and options displayed.

azbb default options

Once installed, you can start with a very simple template example.  Samples for various scenarios can be found here:

I’m going to try what looks to be the simplest of all the VM samples: vm-simple.json.

This is the contents of that file:

    "$schema": "",
    "contentVersion": "",
    "parameters": {
        "buildingBlocks": {
            "value": [
                    "type": "VirtualNetwork",
                    "settings": [
                            "name": "simple-vnet",
                            "addressPrefixes": [
                            "subnets": [
                                    "name": "default",
                                    "addressPrefix": ""
                    "type": "VirtualMachine",
                    "settings": {
                        "vmCount": 1,
                        "osType": "windows",
                        "namePrefix": "jb",
                        "adminPassword": "testPassw0rd!23",
                        "nics": [
                                "subnetName": "default",
                                "isPublic": true
                        "virtualNetwork": {
                            "name": "simple-vnet"

That’s all there is to the file!  You can see pretty easily that this just creates a simple Windows Server 2016 VM, along with a supporting virtual network and subnet.  Interestingly, and I think this helps underscore one of the values of Azure Building Blocks, if I had set the vmCount to something other than 1, it would have created that many VMs AND put them in an availability set for me.  Automagically.
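As a quick illustration (my own tweak, not from the sample file; most settings omitted for brevity), raising vmCount in the VirtualMachine block is all it takes to ask for multiple machines plus that automatic availability set:

```json
{
    "type": "VirtualMachine",
    "settings": {
        "vmCount": 3,
        "osType": "windows",
        "namePrefix": "jb"
    }
}
```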

Now, having saved that vm-simple.json file to my disk, and after I log in to Azure…

az login

…I can run this command…

azbb -g testRG -s xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -l westus -p .\vm-simple.json --deploy

…where the xxx-xxx’s are replaced by my Azure Subscription ID.  And since I ran the command while in the folder containing the vm-simple.json file, I only needed to use “.\” as the path to the file.

And after a few minutes, the Resource Group in the Azure portal looks like this:


You’re also given a couple of output files that contain the JSON which created the virtual network and the machine.  It was only after digging into one of these that I could see that the default username used was “adminUser”.

For the full story, read the announcement here, and start learning and working through the tutorials here.

Azure Building Blocks

What do you think?  Let us know in the comments if you have any questions or rants.

Update: Did Kevin pass Azure test 70-532?

Let me reveal my answer to you in the form of a couple of test questions:

1. Kevin found the test harder than he expected.

  • True
  • False

2. What areas did Kevin wish he had studied and understood more deeply during the test?

A.  Code.  Actual C# code against Azure web and identity services

B.  Azure Functions. I know the overview, but had to guess on some of the “fill in the blank” code snippets

C.  Programmatic use of messaging; again, you should know what the code looks like for this.

D.  All of the above.

3.  In spite of himself, Kevin passed.

  • True!
  • False

It’s not uncommon to go into a test and get a little flustered by questions that are just outside of the areas you studied, or that go into depth or details you weren’t expecting.  It makes affirming that last “Are you sure (you want to exit the test and see that you’re a miserable failure)?” question that much more difficult.  But I clicked it, and found that I had indeed passed.  So.. not so miserable!

And there was much rejoicing...

“What’s next?  Are you going to re-do 70-533 again?”

Y’know, I actually think I might look into another track entirely.. like maybe the CISSP certification, or maybe some Amazon Web Services training and certs.  I know Azure really well, but only know the basics of AWS.

So, how about you?  Are you on your own certification adventure?  Feel free to share with us in the comments!

DSC: Cut to the Core

This is an interesting development.  I had a good friend and respected local technologist mention this to me the other day, and he wasn’t happy.  “Why would they take away features?  Just to be ‘consistent’?”  Apparently his take is that Microsoft is reducing something that was powerful down to a subset of its former usefulness.

Here’s what he was referring to…

In a DSC future direction update on the PowerShell Team Blog the other day, Microsoft announced a new direction for DSC.  For those not familiar with DSC, it stands for Desired State Configuration.  According to Jeffrey Snover (the architect of PowerShell), PowerShell DSC was the whole reason why PowerShell was invented.  We want the ability to define and apply a configuration as a “desired state” to a machine (or machines), and have it applied consistently and, optionally, perpetually.  Write up some simple text, and “Make it so,” with all the benefits of text (source control, among others).
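To make that “simple text” idea concrete, here’s a minimal Windows PowerShell DSC configuration of my own (not from the blog post) that declares “IIS should be installed” as a desired state:

```powershell
Configuration WebServerBaseline {
    Node 'localhost' {
        # Declare the desired state: the IIS role must be present
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Compiling the configuration emits a .mof document, which the
# Local Configuration Manager then applies (and can keep re-applying):
WebServerBaseline -OutputPath .\WebServerBaseline
Start-DscConfiguration -Path .\WebServerBaseline -Wait -Verbose
```

And because the configuration is just text, it checks into source control like any other file.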

Initially, of course, PowerShell DSC was addressing the configuration of Windows-based servers, but it was no secret that, being built with standards in mind, it was built to support the ongoing configuration of Linux workloads as well.  In fact, this really created two worlds: PowerShell DSC for Windows and PowerShell DSC for Linux, because both had their own unique set of requirements, dependencies, supporting frameworks, and allowed commands.  Somewhat understandable, sure.  Feature parity?  Um, no.

So now Microsoft announces “DSC Core”.

“What is DSC Core?”

I’m glad you asked.  It is “a soon to be released version of DSC that aligns with PowerShell Core”.

“PowerShell Core?  What’s that?”

PowerShell Core is the open-source cross-platform version of PowerShell that runs on Windows, Mac, and Linux.  It runs on top of .NET Core…
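If you’re ever unsure which flavor you’re running, $PSVersionTable makes it easy to check (the PSEdition property exists in newer versions of PowerShell):

```powershell
# 'Desktop' means full Windows PowerShell; 'Core' means PowerShell Core
$PSVersionTable.PSEdition
```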

“.NET Core?  What the…”

Yeah.. okay.  .NET Core is “a general purpose development platform maintained by Microsoft and the .NET community”. 

“Oh, I get it.”

You do?  Okay.  Well, anyway… back to DSC Core.  DSC Core (built using PowerShell Core which is built upon .NET Core) now becomes a common, cross-platform version of PowerShell DSC.  

From the “Future Direction” blog post:

“Our goals with DSC Core are to minimize dependencies on other technologies, provided a single DSC for all platforms, and position it better for cloud scale configuration while maintaining compatibility with Windows PowerShell Desired State Configuration.”

So this subset (if we can call it that) will still be compatible with PowerShell, but it won’t have the large number of unique Windows dependencies bogging it down.

“What about compatibility?  What about the cmdlets?  Will they be the same, or will I have to use different ones?  What about DSC Resources?  Will they have to be recreated?”

All of those and a few other questions (like what to do about Pull Servers) are addressed in the “Future Direction” blog post.

So, “Why would they take away features?  Just to be ‘consistent’?” 

What do you think?  Feel free to discuss/rant/pontificate in the comments section below.

And again, read the full article on the PowerShell Team Blog.

New TechNet Radio Series: Tech Futures

“Empowering every organization and every individual on the planet to achieve more.”

It’s an ambitious statement, and Microsoft’s vision for a more productive workforce and everyday life experiences.  But how do we get there?

Today we’re kicking off a new series entitled “Tech Futures” which is devoted to the idea of exploring new possibilities in our quest to “achieve more” in our daily lives. From connected cows to smart fridges, 3D printed pizza and driverless cars — all while tackling cyber security and privacy issues as well as trying to figure out how to support this new tech frontier through infrastructure management and enterprise mobility — no topic is off-limits.

So join us in our journey and let us know what you think the future will look like and how we can challenge what we see today, so that we can create the technologies that will shape tomorrow together.




If you’re interested in learning more about the products or solutions discussed in this episode, click on any of the below links for free, in-depth information:

Other Websites & Blogs:

 Follow the conversation @MS_ITPro
 Become a Fan @
 Follow Kevin @KevinRemde
 Follow Kevin’s “Full of I.T.” on Facebook

Subscribe to our podcast via iTunes, Stitcher, or RSS

How I automated a hands-on-lab infrastructure – The PowerShell Script and completing the build (Part 6)

In this final part of our series we walk through the PowerShell script that connects to Azure, creates the resource group, and then launches the creation of the lab environment.  We also walk through the final couple of manual steps required to complete the lab setup.


How I automated a hands-on-lab infrastructure – Additional Automations (Part 5)

In the 5th part of our series we look at some of the remaining requirements for our lab environment, and how we automatically launched a PowerShell script on each of our virtual machines to perform the final machine-dependent tasks.

How I automated a hands-on-lab infrastructure – Copy down files (Part 4)

Our lab environment requires that several sample scripts and other utilities be available on several of the member servers.  But how do we get those files onto the C: drive?  In this video I’ll show how that was accomplished.