Azure Building Blocks – Like Legos™ for the cloud?

I thought this was an interesting announcement.  Yesterday Microsoft announced a new, simpler way to deploy Infrastructure-as-a-Service (IaaS) resources into Azure, along with a new command-line tool to drive it.  The tool is called Azure Building Blocks.  These building blocks are described as “a set of tools and Azure Resource Manager templates that are designed to simplify deployment of Azure resources.”

“But isn’t that what Azure Resource Manager templates are for?”

Absolutely, but apparently some people find the creation and management of the full templates a little too complex, especially when they only need to spin up some resources with basic best-practice or common default parameters.  “I just want a Windows Server VM… I don’t want to have to define the storage type or network or all that.  Just do it.”  So you define your machine (or other resources) in a much simpler template (parameter) file, which is then used to deploy it.  It’s “A tool for deploying Azure infrastructure based on proven practices.”

“Cool!  How do I get started?”

The Azure Building Blocks page is where you should begin.  The tool claims to run equally well on Windows or Linux (or Bash on Windows, or even in the Azure Cloud Shell).  Fundamentally, it requires the newest Azure CLI as well as Node.js installed in your shell of choice.  Then you install the tool with this command:

npm install -g @mspnp/azure-building-blocks
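
(If you’re not sure whether the prerequisites are already in place, both tools will report their versions; these are standard flags for Node.js and the Azure CLI:)

node --version
az --version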

Verify that the tool installed by running “azbb”; you should see the typical command usage and options displayed.

azbb default options

Once installed, you can start with a very simple template example.  Samples for various scenarios can be found here: https://github.com/mspnp/template-building-blocks/tree/master/scenarios
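
If you’d rather have the samples locally, one simple option (assuming you have git installed) is to clone the whole repository and work from its scenarios folder:

git clone https://github.com/mspnp/template-building-blocks.git
cd .\template-building-blocks\scenarios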

I’m going to try what looks to be the simplest of all the VM samples: vm-simple.json.

This is the contents of that file:

{
    "$schema": "https://raw.githubusercontent.com/mspnp/template-building-blocks/master/schemas/buildingBlocks.json",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "buildingBlocks": {
            "value": [
                {
                    "type": "VirtualNetwork",
                    "settings": [
                        {
                            "name": "simple-vnet",
                            "addressPrefixes": [
                                "10.0.0.0/16"
                            ],
                            "subnets": [
                                {
                                    "name": "default",
                                    "addressPrefix": "10.0.1.0/24"
                                }
                            ]
                        }
                    ]
                },
                {
                    "type": "VirtualMachine",
                    "settings": {
                        "vmCount": 1,
                        "osType": "windows",
                        "namePrefix": "jb",
                        "adminPassword": "testPassw0rd!23",
                        "nics": [
                            {
                                "subnetName": "default",
                                "isPublic": true
                            }
                        ],
                        "virtualNetwork": {
                            "name": "simple-vnet"
                        }
                    }
                }
            ]
        }
    }
}

That’s all there is to the file!  You can see pretty easily that this just creates a simple Windows Server 2016 VM, along with a supporting virtual network and subnet.  Interestingly (and I think this helps underscore one of the values of Azure Building Blocks), if I had set the vmCount to something other than 1, it would have created that many VMs AND put them in an availability set for me.  Automagically.

Now, having saved that vm-simple.json file to my disk, and after I log in to Azure…

az login

…I can run this command…

azbb -g testRG -s xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx -l westus -p .\vm-simple.json -deploy

…where the xxx-xxx’s are replaced by my Azure Subscription ID.  And since I ran the command while in the folder containing the vm-simple.json file, I only needed to use “.\” as the path to the file.
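
If you don’t have your subscription ID handy, the Azure CLI you just logged in with can give it to you (this works the same whether you’re in PowerShell or Bash):

az account show --query id --output tsv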

And after a few minutes, the Resource Group in the Azure portal looks like this:

The new Resource Group and its resources in the Azure portal

You’re also given a couple of output files containing the JSON that actually created the virtual network and the machine.  It was only after digging into one of these that I was able to see that the default username used was “adminUser”.
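
If you then want to log in to that new VM as “adminUser”, one way to find its public IP address is with the regular Azure PowerShell cmdlets.  A quick sketch, assuming the AzureRM module and that you’re already logged in with Login-AzureRmAccount:

# List the public IP address(es) created in the resource group we deployed to
Get-AzureRmPublicIpAddress -ResourceGroupName testRG | Select-Object Name, IpAddress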

For the full story, read the announcement here, and start learning and working through the tutorials here.

Azure Building Blocks

What do you think?  Let us know in the comments if you have any questions or rants.

Azure has PowerShell (?!)

“Oh c’mon, Kevin.. Everybody knows you can use PowerShell against Azure…”

Yeah, sure.  But did you know that they’ve finally added a preview that lets you use a PowerShell shell right from the browser?

“No! Do tell!”

For a while now you’ve had the ability to click what looks to be a command-prompt icon in the upper-right-hand corner of your Azure portal window.

Shell Icon

That opens up a terminal-like window at the bottom of the browser, and you’re in a Bash session.  There’s a drop-down at the top of that window that suggested you could choose between Bash and PowerShell, but PowerShell was “coming soon”.  Well, soon was this week.

Shell Choices

Setting it up is fairly straightforward.  When you select PowerShell as your shell, you’re given a notice that you’ll need a dedicated storage account associated with this capability.  This storage will be used to host your default cloud drive file share.

Note: Other than in this file share, there is no persistence between terminal sessions.   More about this later.

Configuring Storage

As you see above, I didn’t have storage created for this, so after I selected my subscription, it created a storage account for me.  I didn’t select the Show advanced settings option, but if I had, I would have been able to choose an existing (or create a new) resource group, storage account, and file share.

When I was done, I had a default resource group created to host that storage account.

Resource Group Created

The shell window displays the status of the configuration, which does take a minute or two…

Configuring the PowerShell terminal session for the first time

And when you’re done, you’ve got a shell of POWER!

All done configuring

Notice at the top that you can also now select between Bash and PowerShell, reset the session, click through to common help topics, or manage settings (which, as of right now, is just manipulating the text size and providing feedback to Microsoft).

“Cool!  So what can you do with it?”

I haven’t gotten that far.. but let’s try a couple of things to see what the environment looks like.  Let’s start with a simple Get-Service cmdlet.  It actually took about 5 seconds to respond, but when it did it came back with what I expected…

Running Get-Service

I have to assume that I can run Azure PowerShell commands, like listing the resource groups using Get-AzureRmResourceGroup.

Typical Azure cmdlet

The capture above is truncated because I thought I shouldn’t give you a list that also contains my subscription ID and other groups.. but trust me that this worked as expected.
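
If you want a tidier (and more screenshot-friendly) listing, you can always pipe the output to Select-Object, for example:

Get-AzureRmResourceGroup | Select-Object ResourceGroupName, Location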

“You mentioned that you have a file share created in storage.  How do you get to that?”

You’ll notice that you start out in the Azure: drive.  From here you can navigate to and manage Azure resources:

Navigate to Azure resources
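
Poking around the Azure: drive is just ordinary cd and dir.  Here’s a rough sketch (the subscription name below is a placeholder, and the exact folder names may vary while this is in preview):

cd Azure:
dir                                     # your subscriptions show up as folders
cd '.\My Subscription\ResourceGroups'   # drill into one subscription
dir                                     # ...and browse its resource groups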

But if I want to get to the file system of the local machine, I can cd to $Home…

File system of the machine running my terminal session

Notice that I cd to $Home, which brings me to a profile folder for my current session.  Yes, it’s basically the default set of folders you’d see on a Windows Server 2016 machine (because, under the hood, that’s what it is!).  However (and this is important), putting items anywhere in that file system other than the linked CloudDrive folder will not persist from one session to the next.  So, I cd .\CloudDrive\ and I’m now placed in the file share of my persistent storage.  Whatever I do there will be persisted for me.
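
In other words, the navigation boils down to something like this:

cd $Home           # the profile folder for this session (ephemeral)
dir                # the usual Windows profile folders, plus the linked CloudDrive folder
cd .\CloudDrive\   # anything saved here lands in the persistent file share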

As an exercise for you, try creating a simple text file (echo “Hello, world!” > hello.txt) in both the Documents folder of the server and the root of the .\CloudDrive folder.  Log out of Azure, then back in and into your PowerShell terminal window, and see which file is still there when you get back.  (NOTE: maybe you’ll get lucky and get the same machine if you do it right away.  But if you wait 30 minutes for the VM to time out, I bet the file in the Documents folder will be gone, while the CloudDrive file will still be there.)
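
A quick sketch of that exercise (the paths assume the defaults described above):

echo "Hello, world!" > $Home\Documents\hello.txt    # ephemeral: gone once the session's VM is recycled
echo "Hello, world!" > $Home\CloudDrive\hello.txt   # persistent: survives in the cloud drive file share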

Happy geek!

For more official information, check out the announcement from the Azure Blog, plus the full Overview of Azure Cloud Shell (Preview), and Features & tools for PowerShell in Azure Cloud Shell.

Have you tried this yet?  What do you think?  Shoot me your questions and/or comments below.

DSC: Cut to the Core

This is an interesting development.  I had a good friend and respected local technologist mention this to me the other day, and he wasn’t happy.  “Why would they take away features?  Just to be ‘consistent’?”  His take is that Microsoft is reducing something that was powerful down to a subset of its former usefulness.

Here’s what he was referring to…

In a DSC future direction update on the PowerShell Team Blog the other day, Microsoft announced a new direction for DSC.  For those not familiar with DSC, it stands for Desired State Configuration.  According to Jeffrey Snover (the architect of PowerShell), PowerShell DSC was the whole reason why PowerShell was invented.  We want the ability to define and apply a configuration as a “desired state” to a machine (or machines), and have it applied consistently and, optionally, perpetually.  Write up some simple text, and “Make it so.”, with all the benefits of text (source control, among others). 
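
If you’ve never seen one, that “simple text” is just a PowerShell Configuration block.  Here’s a minimal sketch (the configuration and node names are made up) that makes sure IIS is installed on a server:

Configuration WebServer {
    Node 'Server01' {
        WindowsFeature IIS {
            Ensure = 'Present'      # make sure the feature is installed
            Name   = 'Web-Server'   # the IIS role
        }
    }
}

WebServer -OutputPath .\WebServer                          # compile the configuration into a MOF file
Start-DscConfiguration -Path .\WebServer -Wait -Verbose    # push it to the node -- "Make it so."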

Initially, of course, PowerShell DSC was addressing the configuration of Windows-based servers, but it was no secret that, being built with standards in mind, it was designed to support the ongoing configuration of Linux workloads as well.  In fact, this really created two worlds: PowerShell DSC for Windows and PowerShell DSC for Linux, because each had its own unique set of requirements, dependencies, supporting frameworks, and allowed commands.  Somewhat understandable, sure.  Feature parity?  Um, no.

So now Microsoft announces “DSC Core”.

“What is DSC Core?”

I’m glad you asked.  It is “a soon to be released version of DSC that aligns with PowerShell Core”.

“PowerShell Core?  What’s that?”

PowerShell Core is the open-source cross-platform version of PowerShell that runs on Windows, Mac, and Linux.  It runs on top of .NET Core…

“.NET Core?  What the…”

Yeah.. okay.  .NET Core is “a general purpose development platform maintained by Microsoft and the .NET community”. 

“Oh, I get it.”

You do?  Okay.  Well, anyway… back to DSC Core.  DSC Core (built using PowerShell Core, which is built upon .NET Core) now becomes a common, cross-platform version of PowerShell DSC.

From the “Future Direction” blog post:

“Our goals with DSC Core are to minimize dependencies on other technologies, provide a single DSC for all platforms, and position it better for cloud scale configuration while maintaining compatibility with Windows PowerShell Desired State Configuration.”

So this subset (if we can call it that) will still be compatible with PowerShell, but it won’t have the large number of unique Windows dependencies bogging it down.

“What about compatibility?  What about the cmdlets?  Will they be the same, or will I have to use different ones?  What about DSC Resources?  Will they have to be recreated?”

All of those and a few other questions (like what to do about Pull Servers) are addressed in the “Future Direction” blog post.

So, “Why would they take away features?  Just to be ‘consistent’?” 

What do you think?  Feel free to discuss/rant/pontificate in the comments section below.

And again, read the full article on the PowerShell Team Blog.

How I automated a hands-on-lab infrastructure – The PowerShell Script and completing the build (Part 6)

In this final part of our series we walk through the PowerShell script that connects to Azure, creates the resource group, and then launches the creation of the lab environment.  We also walk through the final couple of manual steps required to complete the lab setup.

https://channel9.msdn.com/Blogs/FullofIT/ARMPart6/player

 

How I automated a hands-on-lab infrastructure – Additional Automations (Part 5)

In the 6th part of our series we look at some of the remaining requirements for our lab environment, and how we automatically launched a PowerShell script on each of our virtual machines to perform the final machine-dependent tasks.

https://channel9.msdn.com/Blogs/FullofIT/ARMPart5/player

How I automated a hands-on-lab infrastructure – Copy down files (Part 4)

Our lab environment requires that several sample scripts and other utilities be available on several of the member servers.  But how do we get those files onto the C: drive?  In this video I’ll show how that was accomplished.

https://channel9.msdn.com/Blogs/FullofIT/ARMPart4/player

 

How I automated a hands-on-lab infrastructure – Domain-Join the member servers (Part 3)

In the fourth part of our series we look at how to automatically join the three other virtual machines to the contoso.com domain, again using a custom extension in our Azure Resource Manager template.

https://channel9.msdn.com/Blogs/FullofIT/ARMPart3/player