Do you, Release Management, take this feature, Deployment Slots, to be your DevOps partner?

Abstract

When I first started reading about Deployment Slots I had more questions than answers. My most obvious concern was what was swapped and what was not. The original documentation made statements like the following: "A slot that you intend to swap into production needs to be configured exactly as you want to have it in production." This was a nonstarter for me. There is no way I intend to have my Dev, QA or even my Staging code pointing at a production database. I am sure we all agree that pointing Dev or QA at a production database is foolish. But some might argue that Staging is production code just waiting to be swapped into production. I would argue that there are times where this is not true: if the changes in Staging require "breaking" database schema changes, things fall apart quickly. Some companies address this issue by requiring that a release never break the previous release; that way, if you need to quickly swap back to the previous version, your application will still run. Nevertheless, I prefer Staging to point at a different database than Production until the moment I want to swap them. When I perform the swap I don't want my connection strings from Staging to follow me into Production. I need the Production connection strings to stick with the Production slot as the Virtual IP (VIP) address is updated. A feature introduced in version 0.8.10 of the Azure PowerShell tools allows just that: the SlotStickyConnectionStringNames setting keeps connection strings from moving during swap operations.

With this new feature in place I decided to see if I could combine Web Deployment Slots with Release Management. I will use Visual Studio Online (VSO) and Release Management Online (RMO) to manage the movement of the code from slot to slot, with database changes promoted with SSDT via proxy servers. A proxy server is a machine that sits between RMO and the target compute instance (PaaS Website, Cloud Service, Linux machine, etc.). I will deploy to the first slot and simply swap my way into production.

Outline

[ ] Add PowerShell to project
[ ] Configure build to create package
[ ] Create Azure Website with Dev and QA slots
[ ] Create 3 Azure SQL Databases
[ ] Create 3 Azure IaaS proxy VMs (install SSDT, Web Deploy, Azure SDK)
[ ] Add Azure subscription to RM
[ ] Configure environments in RM
[ ] Create release path in RM
[ ] Create components in RM
[ ] Create release template in RM
[ ] Trigger build

Application

The application I am deploying is my People Tracker application, which I first introduced at TechEd North America 2014. It is a simple ASP.NET MVC application backed by SQL Server. I am using Entity Framework database first, with SSDT to deploy. A key point to be aware of is that your SSDT database project must be configured to target Microsoft Azure SQL Database.

My goal is to set up an Azure Website with two deployment slots, Dev and QA. Each slot of the website will be backed by an Azure SQL Database (Production, Dev and QA).

In the MVC project I create a Configuration folder to hold the PowerShell scripts I need to deploy the database, deploy the website and swap the slots. It is very important that you set Copy to Output for each of these files to "Copy always". This ensures they are copied to the drop location of your build. Below you will find the three files required to deploy using Deployment Slots. Make sure you use -Verbose to produce command output.
All the configuration variables we add to Release Management, plus several system variables, are available to our PowerShell scripts. $AzureWebsiteName is a global configuration variable I will set under the Administration tab. $Slot is a component-level configuration variable. Finally, $applicationPath and $PackageName are system variables automatically passed in by Release Management.

The Publish-AzureWebsiteProject cmdlet copies the files from the drop location to the provided slot. This script is only used during the first stage of our release path.

Publish-AzureWebsiteProject -Name $AzureWebsiteName -Slot $Slot -Package "$applicationPath\_PublishedWebsites\$($ProjectName)_Package\$ProjectName.zip" -Verbose

To move the code the rest of the way we simply swap it from slot to slot. Make sure you don't forget -Force; without it the command will fail because it tries to display a confirmation prompt.

Switch-AzureWebsiteSlot -Name $AzureWebsiteName -Slot1 $From -Slot2 $To -Force -Verbose

Finally we have the PowerShell script that calls SqlPackage.exe to deploy our SSDT dacpac.

& 'C:\Program Files (x86)\Microsoft SQL Server\120\DAC\bin\SqlPackage.exe' /Action:Publish /SourceFile:$applicationPath\$FileName /TargetConnectionString:"Data Source=$SqlServer;User ID=$UserID;Password=$Password;Initial Catalog=$DatabaseName"

Now check the code in to VSO. With the solution ready we can turn our attention to the build.

Build

Thanks to the integration of VSO and RMO you can use the out-of-the-box build process template to trigger a release from a build. The only thing we need to do is add arguments to pass to MSBuild so it creates a package for our web project. Simply set the value of MSBuild Arguments to /p:DeployOnBuild=True. With the solution and build configured we need to create the environments in Azure to deploy to.

Resources

Below is the contents of the resource group I created to put this demo together. The new Resource Group feature of the Preview Portal makes it very easy to see all the related resources, and there are more than you might first expect. This demo requires a website with two deployment slots, three IaaS virtual machines, three Azure SQL Databases, a virtual network and a storage account.

Your first question might be: if we are using PaaS Websites, why do I have three IaaS virtual machines? When working with PaaS you need a machine from which to execute your PowerShell, because you cannot connect to the compute instance behind your website. And because Release Management currently has a restriction where a server used in an Azure deployment can only appear in one environment, we are required to create an IaaS virtual machine to act as a proxy server for each slot. I have been assured a better solution is coming. Another alternative is to forgo RMO, use an on-premises installation of Release Management, and use a single agent-based proxy for all the stages.

Azure SQL Databases

I had to select a service tier of at least Standard to get my deployments to work; Basic would always time out. If you intend to connect to your Azure SQL Databases from your development machine, be sure to add your machine's IP address to the database server's firewall rules. Otherwise you will never be able to connect.

Azure Websites

Deployment slots are only available in the Standard web hosting plan mode, so make sure you select Standard or you will have to upgrade later.
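If you would rather script this provisioning than click through the portal, here is a minimal sketch using the service management cmdlets of that era. The site name, server name and IP address are placeholders, and it assumes the site has already been switched to Standard mode:

# Create the website and its Dev and QA deployment slots
# (the site must be in Standard mode before slots can be added).
New-AzureWebsite -Name "mysite" -Location "East US"
New-AzureWebsite -Name "mysite" -Location "East US" -Slot "Dev"
New-AzureWebsite -Name "mysite" -Location "East US" -Slot "QA"

# Allow your development machine through the Azure SQL Server firewall.
New-AzureSqlDatabaseServerFirewallRule -ServerName "myserver" -RuleName "DevWorkstation" -StartIpAddress "203.0.113.42" -EndIpAddress "203.0.113.42"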
After you have created the Dev and QA slots, make sure you set the desired connection strings for each slot, pointing to the correct Azure SQL Database. Because I am using Entity Framework database first (the same would be true for model first), I had to select Custom as the connection string type when setting the connection strings for the slots.

Now we have come to the most important point of this post: I have to use the Set-AzureWebsite cmdlet with the SlotStickyConnectionStringNames switch to make sure the connection strings stick to the slot during a swap.

Set-AzureWebsite -Name mysite -SlotStickyConnectionStringNames @("DemoDBEntities")

You only have to run this command on the production slot; the other slots share the sticky connection string names settings.

Proxy Servers

We have now come to the oddest part of this demo. We cannot connect directly to a PaaS website to install an agent or target it with DSC, so we have to use another machine to execute our PowerShell scripts, using the cmdlets from the Azure SDK, to deploy our websites and swap our slots. These machines are called proxy servers. They sit between RM and the websites and provide a place for us to execute our scripts.

When I create virtual machines in Azure I prefer to first create the Cloud Service and Storage Account that will hold them, so that I can control the names. If you simply use the Virtual Machine wizard it will create a Cloud Service and Storage Account with generated names. You can store all three virtual machines in the same Cloud Service and Storage Account.

Proxy servers do not have to be powerful machines; I created the smallest Windows Server 2012 R2 machine I could. You need to create one for each slot you intend to deploy to. Once the servers are provisioned and running, do yourself a favor and disable IE Enhanced Security Configuration. Now we need to install the following components on each machine:

- Microsoft Azure PowerShell with Microsoft Azure SDK
- Microsoft SQL Server Data-Tier Application Framework (DACFx) (June 2014)
- Web Deploy

Once the platform installer starts you can click the back arrow, search for all the components, and install them all at once instead of one at a time.

Once the components are installed we need to configure the Azure PowerShell tools to connect them to your Azure subscription. To begin, run the Add-AzureAccount cmdlet. If you have more than one subscription connected to the account you will need to set the default subscription (a minimal sketch appears at the end of this post).

Deployment

Below is the flow of the code through the deployment slots. At time zero, when you have simply created the PaaS Website with a Dev and QA slot and three empty Azure SQL Databases, all three slots show the default page. Once the release deploys to the first stage using the deployDb.ps1 and deployWebSite.ps1 files the environments will look like this. Now that the code has been copied to the Dev slot, all we have to do is update the QA database and swap the Dev and QA slots. At this point the QA slot will be the only slot that can be accessed. Moving to Production is very similar to moving to QA: simply update the Production database and swap the QA and Production slots.

I think the images really drive home the fact that it is indeed a swap, and not a copy, from one environment to another. The next three images show the movement of V2 through the stages. The V2 version of the site has a column for Middle Name.
If we elect to enforce the requirement that each release must be backwards compatible with the current version, both the Production and QA slots would be accessible at this point.

Release Management

We now have to connect Release Management to our Azure account. From the Administration tab click "Manage Azure". Click the New button to open the "New Azure Subscription" page. Use any name you like. You can locate your Subscription ID in the Azure Management Portal: select "Settings", and the Settings page lists all your subscriptions with their Subscription IDs. Just copy and paste the GUID into Release Management. Next you will need to get your Management Certificate Key from https://manage.windowsazure.com/publishsettings. Save the file somewhere safe and open it with a text editor. Copy the ManagementCertificate value, without the quotes, and paste it into Release Management. Finally you need to provide a Storage Account that Release Management will use to move files to Azure. You can use the same Storage Account we created to hold our virtual machines.

With your Azure account configured in Release Management we can now create our environments. Click the "New vNext Azure" button on the Environments tab under "Configure Paths", then click "Link Azure Environment". Select your subscription, select the storage account we created and click Link. Now click "Link Azure Servers", select our virtual machine and click Link. You can now Save & Close the environment. You need to repeat this for all three proxy servers.

Next we have to create the "vNext Release Path" using our new environments. Add as many stages as you need (one for each slot). The bits of our build will be copied to these machines and the PowerShell will be executed from them. The PowerShell will publish our application to the Azure Website, deploy our database changes and swap our deployment slots.

Next we need to define components. I decided to create a component for each PowerShell script I am going to run (DB, Swap and Website). From the "Configure Apps" tab select "Components". Click "New vNext", give it a name and set the "Path to package" to "\". You will also need to create some configuration variables. There is one configuration variable that I need in several scripts, so I defined it at the global level under Administration, Settings.

The final step is creating a "vNext Release Template". Click "New" from the "vNext Release Templates" page. Give it a name, select the release path we just created, set the build definition to the correct build and click Create. Right-click on Components in the Toolbox and click Add, then link the components to the release template.

Now drag a "Deploy Using PS/DSC" action to the Deployment Sequence. Select the server name from the dropdown. Enter the username you selected when you created the virtual machine in .\UserName format. Enter the password, select the component, set PSScriptPath to "Configuration\deployDb.ps1" and set SkipCaCheck to "True". Add a custom configuration variable for DatabaseName and set it to the Dev database name.

Now add another "Deploy Using PS/DSC" action to the Deployment Sequence. Enter the username you selected when you created the virtual machine in .\UserName format. Enter the password, select the component, set PSScriptPath to "Configuration\deployWebSite.ps1" and set SkipCaCheck to "True". Add a custom configuration variable for Slot and set it to the Dev slot name.
The QA and Production stages are very similar, except that instead of using the deployWebSite.ps1 script in the second action you are going to use the swapSlots.ps1 file and add Slot1 and Slot2 configuration variables.

QA Stage

Production Stage

Now simply trigger a build and your release will begin.

deployDb.ps1 (246.00 bytes)
deployWebSite.ps1 (163.00 bytes)
swapSlots.ps1 (174.00 bytes)
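As promised in the Proxy Servers section, here is a minimal sketch of connecting each proxy server's Azure PowerShell tools to a subscription; the subscription name is a placeholder:

# Sign in interactively; this caches credentials for the Azure cmdlets.
Add-AzureAccount

# List the subscriptions tied to the account.
Get-AzureSubscription

# If more than one subscription is available, pick the default
# (replace "MySubscription" with your own subscription name).
Select-AzureSubscription -SubscriptionName "MySubscription" -Default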

My DSC works from PowerShell ISE but not from Release Management

If you are like me, you learned Desired State Configuration (DSC) before trying to incorporate it with Release Management. I recommend this because you get a pure understanding of DSC and its full capabilities. The only caveat to learning DSC this way is that certain things are different when DSC is used with Release Management. One of those differences came up today: the way we identify the nodes. Using pure DSC you have many choices for identifying the nodes to target. You can use configuration data, hardcode the values or even parameterize your DSC script. However, I have had the most success when I use $env:COMPUTERNAME. You could also use 'localhost'.

When you learn DSC outside of Release Management you use a developer workstation to create the MOFs and push them from there to the target nodes. However, when Release Management runs a DSC script, the script is first copied to the target machine and then executed there to create the MOF file that is applied to the node. Because the script runs on the target, using $env:COMPUTERNAME eliminates the need to hardcode any values in your script and reduces the chance for error.
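To make that concrete, here is a minimal sketch of a configuration written this way; the WindowsFeature resource and feature name are just placeholders for whatever your script actually configures, and it assumes the script generates and applies the MOF itself:

Configuration WebServerConfig
{
    # Because Release Management copies this script to the target machine
    # before running it, $env:COMPUTERNAME resolves to the target node.
    Node $env:COMPUTERNAME
    {
        WindowsFeature IIS
        {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
    }
}

# Generate the MOF and apply it locally on the target.
WebServerConfig -OutputPath 'C:\Mofs'
Start-DscConfiguration -Path 'C:\Mofs' -Wait -Verbose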

Is DSC an upgrade to Agent-based pipelines?

There have been a lot of questions from customers, and on internal distribution lists lately, about Release Management, agent-based vs vNext pipelines, and how best to deploy to PaaS. So instead of answering the same questions over and over again, I decided to write this post and just point people here.

There are a couple of points I want to clarify. First, nothing stops you from running a DSC script via the Microsoft Deployment Agent. The agent can run any PowerShell, and DSC is just a PowerShell extension, so it can be executed via the Microsoft Deployment Agent. Second, DSC is not an "agentless" solution. From a Release Management perspective some people describe Desired State Configuration (DSC) as an agentless deployment. That is not a true statement: the Local Configuration Manager (LCM) running on the machine is the agent. The nice thing is that if you are targeting Windows Server 2012 R2 or Windows 8.1 the LCM is already installed and ready to go. But don't kid yourself: it is an agent. If you are targeting older versions of Windows you have to install Windows Management Framework 4.0 before you can use DSC. Therefore, setting up an agent-based pipeline and setting up a vNext pipeline (I prefer calling these DSC pipelines and will for the rest of this post) both require the installation of an "agent" on the target machine.

Many users of Release Management see DSC as an "upgrade" to, or replacement for, the agent-based solution. I could not disagree more. There are situations DSC simply does not handle well and others it is great for. If you really look at DSC from the Get, Set, Test perspective, that perspective limits its use. A resource that is hardcoded to return false from its Test method has no business being a resource; therefore, running tests via DSC makes no sense. The same can be said of the agent-based solution: there are some things it does great and others where it does not.

Many people are running to DSC because it is new and shiny, but it is not a panacea. Don't get me wrong: I am a big fan of DSC and could not be more excited about getting it running on Linux, but it is simply a tool in my toolbox. I don't see the DevOps world as a one-or-the-other situation. DevOps is about people, process and products, and getting them to work and communicate better while automating your pipeline with whatever makes sense for your desired result. If it is DSC, great. If it is PowerShell, Chef, Docker or Puppet, fine. Or maybe it is a combination of all of the above. The goal is a way to easily track, manage and automate the promotion of our code from one environment to another.

The agent-based solution is alive and well. The goal of deploying to PaaS, for example, can be achieved today using an agent-based solution that scales much better than the DSC alternative. Let me explain why. In a previous blog post I describe a technique of using a DSC pipeline to deploy to a PaaS website. In that post I deploy to a single stage using an IaaS VM as a proxy to execute my DSC. Release Management today does not allow you to have the same server in multiple environments for a DSC pipeline. This means for each stage of my pipeline I would have to stand up a proxy. Compare this to the agent-based pipeline, where the same machine can appear in multiple environments. This allows you to reuse a single proxy machine to target all your stages.

I don't feel DSC is the answer to all our problems. I feel very confident that it is not. We are not in a DSC-or-bust situation.
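For readers who have not written a resource, the Get, Set, Test contract mentioned above is the three functions every MOF-based DSC resource module exports. Here is a minimal sketch using an invented ensure-a-file-exists resource (a real resource also needs a schema MOF and module manifest):

# A minimal MOF-based DSC resource: ensure a marker file exists.
function Get-TargetResource
{
    param ([Parameter(Mandatory)] [string] $Path)
    # Report the current state of the resource.
    @{ Path = $Path; Exists = (Test-Path -Path $Path) }
}

function Test-TargetResource
{
    param ([Parameter(Mandatory)] [string] $Path)
    # Return $true when the system already matches the desired state;
    # the LCM then skips Set. A resource hardcoded to return $false
    # forces Set to run on every apply, the misuse called out above.
    Test-Path -Path $Path
}

function Set-TargetResource
{
    param ([Parameter(Mandatory)] [string] $Path)
    # Bring the system into the desired state.
    New-Item -Path $Path -ItemType File -Force | Out-Null
}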
Solve your problem with the best tool you have, which might not necessarily be the newest tool you have.

Trigger a vNext Release from team build

Using Release Management you can implement true continuous delivery. With the latest update of Release Management you can trigger a release via a REST API call to Release Management. In this post I will share the script and build template I used to trigger the release, and how to use them in your team build.

Although team build now allows us to run PowerShell scripts as part of our build, we are still forced to create a custom build template. The reason is that the two opportunities provided by the default template come at the wrong points in the build: we can run a PowerShell script either before or after the build, but we need to run one after the files have been dropped. So the attached build template adds the ability to run a PowerShell script after the files are copied to the drop location.

To use the ps1 and build template in your build they must be stored in TFS, so pick a location and check the files into TFS. The files do not have to be stored together. We will use these server paths when we configure our build.

To create a new build definition using the attached build template, create a build like normal. Once on the Process tab, click the Show details button, then the New… button. Use the Browse dialog to locate the new build template in TFS and click OK. With Trackyon.1.0.xaml selected you will have additional features, most importantly for this post "Post-drop script arguments" and "Post-drop script path". The value for "Post-drop script arguments" is the Release Management Server name, Release Management Server port, team project and target stage, each separated by a space. For example, in the image below my Release Management Server name is "DemoDC" running on port 1000, my team project is "Scrum" and I want to target the "QA" stage of my release path. The value for "Post-drop script path" is the server path to the trigger.ps1 file. Now all you have to do is start your build to have it trigger a release in Release Management.

Before I end this post I would like to touch on some challenges I faced when trying to use the REST API for Release Management. One issue that was very difficult to troubleshoot was the requirement that the URL of TFS configured in Release Management on the Manage TFS tab must match exactly the URL passed in by the script. The value used by the script comes from $env:TF_BUILD_COLLECTIONURI, which is read from the Server URL in the Team Foundation Server Administration Console. If those values do not match, the release will fail. You can use the Change URLs link in the Team Foundation Server Administration Console to change the Server URL so it matches the one in Release Management.

As a tip, make sure you have Fiddler installed, because the error messages from the REST API are not very informative. I had to use Fiddler a couple of times to troubleshoot issues.

Trigger.ps1 (2.78 kb)
Trackyon.1.0.xaml (48.84 kb) (right click and Save Target As)
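To give a feel for what Trigger.ps1 does, here is a minimal sketch of calling the Release Management orchestrator endpoint. The endpoint name and query parameters below are my recollection of the RM REST samples, so treat them as assumptions and check the attached script for the authoritative version:

# Arguments handed in by the post-drop step: RM server, port, team project, stage.
param (
    [string] $RmServer,
    [string] $RmPort,
    [string] $TeamProject,
    [string] $TargetStageName
)

# Build-time values provided by team build as environment variables.
$collectionUri   = $env:TF_BUILD_COLLECTIONURI   # must match RM's Manage TFS URL exactly
$buildDefinition = $env:TF_BUILD_BUILDDEFINITIONNAME
$buildNumber     = $env:TF_BUILD_BUILDNUMBER

# Assumed endpoint name from the RM REST samples; verify against Trigger.ps1.
$orchestrator = "http://$($RmServer):$($RmPort)/account/releaseManagementService/_apis/releaseManagement/OrchestratorService"
$uri = "$orchestrator/InitiateReleaseFromBuild?teamFoundationServerUrl=$([uri]::EscapeDataString($collectionUri))" +
       "&teamProject=$TeamProject&buildDefinition=$buildDefinition" +
       "&buildNumber=$buildNumber&targetStageName=$TargetStageName"

# Use the build agent's credentials; watch the traffic in Fiddler if it fails.
Invoke-RestMethod -Method Post -Uri $uri -UseDefaultCredentials -Verbose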