Is DSC an upgrade to Agent-based pipelines?

There have been a lot of questions lately from customers and on internal distribution lists about Release Management, agent-based vs. vNext pipelines, and how best to deploy to PaaS. Instead of answering the same questions over and over, I decided to write this post and just point people here.

There are a couple of points I want to clarify. First, nothing stops you from running a DSC script via the Microsoft Deployment Agent. The agent can run any PowerShell, and DSC is just a PowerShell extension, so it can be executed via the Microsoft Deployment Agent. Second, DSC is not an "agentless" solution. From a Release Management perspective some people describe Desired State Configuration (DSC) as an agentless deployment. That is not a true statement. The Local Configuration Manager (LCM) running on the target machines is the agent. The nice thing is that if you are targeting Windows Server 2012 R2 or Windows 8.1 the LCM is already installed and ready to go, but don't kid yourself: it is an agent. If you are targeting older versions of Windows you have to install Windows Management Framework 4.0 before you can use DSC. Therefore, setting up an agent-based pipeline and setting up a vNext pipeline (I prefer calling these DSC pipelines and will for the rest of this post) both require the installation of an "agent" on the target machine.

Many users of Release Management see DSC as an "upgrade" to, or replacement for, the agent-based solution. I could not disagree more. There are situations DSC simply does not handle well and others it is great for. If you look at DSC from the Get, Set, Test perspective, that model limits its use: every resource must be able to test whether the machine is already in the desired state before setting it. A resource that is hard-coded to return false from its Test method has no business being a resource, so running tests via DSC makes no sense. The same can be said of the agent-based solution: there are some things it does great and others where it does not.
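To make the Get, Set, Test model concrete, here is a minimal sketch of the three functions a classic DSC resource module exports. The resource itself (a folder that must exist) and its parameter are hypothetical, for illustration only:

```powershell
# Minimal sketch of a DSC resource implementing the Get/Set/Test contract.
# The "folder must exist" resource and its Path parameter are hypothetical.

function Get-TargetResource {
    param ([Parameter(Mandatory)] [string] $Path)
    # Report the current state of the machine for this resource.
    @{ Path = $Path; Ensure = if (Test-Path $Path) { 'Present' } else { 'Absent' } }
}

function Test-TargetResource {
    param ([Parameter(Mandatory)] [string] $Path)
    # Return $true only when the machine is already in the desired state.
    # A resource hard-coded to return $false here defeats the whole model.
    Test-Path $Path
}

function Set-TargetResource {
    param ([Parameter(Mandatory)] [string] $Path)
    # Only invoked when Test-TargetResource returned $false.
    New-Item -ItemType Directory -Path $Path | Out-Null
}

Export-ModuleMember -Function *-TargetResource
```

The LCM calls Test first on every run and only falls through to Set when the test fails, which is exactly why a resource that exists purely to "run something" fits the model so poorly.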
Many people are running to DSC because it is new and shiny, but it is not a panacea. Don't get me wrong: I am a big fan of DSC and could not be more excited about getting it running on Linux, but it is simply a tool in my toolbox. I don't see the DevOps world as a one-or-the-other situation. DevOps is about people, process, and products, and getting them to work and communicate better while automating your pipeline with whatever makes sense for your desired result. If that is DSC, great. If it is PowerShell, Chef, Docker, or Puppet, fine. Or maybe it is a combination of all of the above. The goal is a way to easily track, manage, and automate the promotion of our code from one environment to another.

The agent-based solution is alive and well. The goal of deploying to PaaS, for example, can be achieved today using an agent-based solution that scales much better than the DSC alternative. Let me explain why. In a previous blog post I described a technique of using a DSC pipeline to deploy to a PaaS website. In that post I simply deploy to a single stage using an IaaS VM as a proxy to execute my DSC. Release Management today does not allow the same server to appear in multiple environments for a DSC pipeline, which means I would have to stand up a proxy for each stage of my pipeline. Compare this to the agent-based pipeline, where the same machine can appear in multiple environments, allowing you to reuse a single proxy machine to target all your stages.

I don't feel DSC is the answer to all our problems; I feel very confident that it is not. We are not in a DSC-or-bust situation. Solve your problem with the best tools you have, which are not necessarily the newest tools you have.

So sick of Microsoft.WebApplication.targets was not found build errors!

Problem: I was recently connecting an on-premises build server to my Visual Studio Online account (that is crazy easy, by the way) but my first build failed with the following error:

The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\WebApplications\Microsoft.WebApplication.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.

Solution: This is not the first time I have come across this error; just replace "v11.0" with whatever version you want and you have probably been there too. In the past I would copy files onto my build machine or install countless SDK versions trying to make the build machine happy. Not this time; I wanted a cleaner solution. I connected to the build machine and found the desired file in a "v12.0" folder instead of the "v11.0" folder being referenced. So how can I have the build use the correct version? It turns out you can simply pass the Visual Studio version on the Process tab of your build definition. Under the Advanced section, just add the following text to the MSBuild Arguments:

/p:VisualStudioVersion=12.0

Problem solved, and I don't feel all dirty after.
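The same property works outside the build definition, too. If you want to verify the fix locally before queuing a build, you can pass it straight to MSBuild on the command line (the solution name here is hypothetical; run from a Visual Studio developer prompt):

```powershell
# Hypothetical solution path, for illustration only.
# VisualStudioVersion tells MSBuild which tooling folder (e.g. v12.0)
# to resolve Microsoft.WebApplication.targets and friends from.
msbuild .\MyWebApp.sln /p:VisualStudioVersion=12.0
```

If the build succeeds locally with the property set, the same MSBuild argument in the build definition should produce the same result on the build server.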

Achieving Continuous Delivery with VSO and RMO (it is easier than you think)

Release Management Online (RMO) monitors the build definition configured on a release template. When the build completes, a release is automatically kicked off. This is a great step forward from the past, where we had to use custom build templates or resort to the CLI or REST API to trigger a release. This works regardless of whether you are using a hosted or on-premises build controller. It has never been easier to achieve continuous delivery than it is today with the combination of Visual Studio Online (VSO) and RMO. Simply check a project into VSO and add a new build definition. Then, in Release Management, create a new release template associated with that build and check the box "Can Trigger a Release from a Build?" That is it! The next build will start this release. One thing I noticed was missing, and the product team appears aware of, is the inability to set a target stage. As it sits today the target stage will always be the final stage of your release path. That is a relatively small compromise for how easy they have made it to trigger a release from a build in VSO.

How to data bind a Visual Studio 2013 Coded UI Test

Problem: I need to run the same Coded UI Test with different data.

Solution: Data bind your Coded UI Test. To data bind a test in Visual Studio you just need access to the data source and to add attributes to the test. For this example we are going to use a simple CSV file, so add a new text file to your project with a .csv extension and create a comma-delimited file of the desired data. Before you save it, first select "Advanced Save Options" from the File menu and choose "Unicode (UTF-8 without signature) – Codepage 65001". Now right-click the item in Solution Explorer and select Properties. In the Properties window, change "Copy to Output Directory" to "Copy always".

You will need to add two attributes to your test. The first is the DeploymentItem attribute, which takes a single string argument: the name of the CSV file. The second is the DataSource attribute, where you define the class used to read the data, what table to read from, and how the data should be accessed. For a CSV file the first argument is "Microsoft.VisualStudio.TestTools.DataSource.CSV", which identifies the correct class to use to load the CSV file. Next we let the test know where to find the data with "|DataDirectory|\\data.csv". Then we identify the table to read the data from with "data#csv". Finally we give it an access method, DataAccessMethod.Sequential. The final attribute will look like this:

[DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV", "|DataDirectory|\\data.csv", "data#csv", DataAccessMethod.Sequential)]

With the attributes in place you can now use the DataRow property of the TestContext to access the columns in your CSV, for example:

TestContext.DataRow["FirstName"].ToString();

Good luck.

How to use string.Format with LINQ Select

When I project new data types using the Select operator I sometimes want to create new strings from a combination of existing properties. Naturally I turn to string.Format. However, if you attempt a call like the one below:

public object Get(int id)
{
    return this.db.People.Where(p => p.Id == id)
                         .OrderBy(p => p.LastName)
                         .Select(p => new { FullName = string.Format("{0} {1}", p.FirstName, p.LastName) })
                         .ToArray();
}

you will get the following error: "LINQ to Entities does not recognize the method 'System.String Format(System.String, System.Object, System.Object)' method, and this method cannot be translated into a store expression."

The problem is that everything before the .ToArray() is parsed and turned into a command that can be sent to the data source, and the data source has no idea what to do with System.String.Format. There is a very simple solution: call .ToArray() before you use the Select operator.

public object Get(int id)
{
    return this.db.People.Where(p => p.Id == id)
                         .OrderBy(p => p.LastName)
                         .ToArray()
                         .Select(p => new { FullName = string.Format("{0} {1}", p.FirstName, p.LastName) });
}

That simple change performs the select after the results have been retrieved from the data source.

How to change your default language in Visual Studio

Problem: I want to change my default language in Visual Studio.

Solution:
1. Select Tools / Import and Export Settings...
2. Select Reset all settings and click Next >. I suggest backing up your current settings just in case you want them back.
3. Select the default language setup you want to use and click Finish.

To verify, click File / New Project. Your desired language will be selected by default.

Building Ubuntu Servers in Hyper-V

Download the Ubuntu Server 14.04 LTS ISO from http://www.ubuntu.com/download/server

In Hyper-V Manager select New / Virtual Machine and walk through the wizard:
- Before You Begin: click Next
- Specify Name and Location: Name: <Stage> i.e. Dev, QA, Prod. Click Next
- Specify Generation: select Generation 1. If you don't, you will not be able to mount the ISO for Ubuntu. Click Next
- Assign Memory: Startup memory: 512. Click Next
- Configure Networking: connect to the same external network with internet access used for your Windows VM. Click Next
- Connect Virtual Hard Disk: select Create a virtual hard disk. Click Next
- Installation Options: select Install an operating system from bootable CD/DVD-ROM, select Image file (.iso), and browse to the Ubuntu Server ISO. Click Next
- Summary: click Finish

Start the VM, connect to it, and walk through the installer:
- Language: select English. Press Enter
- Ubuntu: select Install Ubuntu Server. Press Enter
- Select a language: select English. Press Enter
- Select your location: select United States. Press Enter
- Configure the keyboard (1): select No. Press Enter
- Configure the keyboard (2): select English (US). Press Enter
- Configure the keyboard (3): select English (US). Press Enter
- Configure the network: Hostname: <serverName>. Press Enter
- Set up users and passwords (1): Full name for the new user: <your name>. Press Enter
- Set up users and passwords (2): Username for your account: <username>. Press Enter
- Set up users and passwords (3): Choose a password for the new user: P2ssw0rd. Press Enter
- Set up users and passwords (4): Re-enter password to verify: P2ssw0rd. Press Enter
- Set up users and passwords (5): Encrypt your home directory: No. Press Enter
- Configure the clock: Is this time zone correct: Yes. Press Enter
- Partition disks (1): Partitioning method: Guided - use entire disk and set up LVM. Press Enter
- Partition disks (2): Press Enter
- Partition disks (3): select Yes. Press Enter
- Partition disks (4): Press Enter
- Partition disks (5): select Yes. Press Enter
- Configure the package manager: HTTP proxy information: <leave blank>. Press Enter
- Configuring tasksel: select No automatic updates. Press Enter
- Software selection: select OpenSSH server, Samba file server, and Tomcat Java server. Press Enter
- Install the GRUB boot loader on hard disk: select Yes. Press Enter
- Finish the installation: Press Enter

Log in to the server and type:

sudo apt-get update

Your server is now installed and up to date.
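If you build these VMs often, the Hyper-V Manager wizard steps above can be scripted with the Hyper-V PowerShell module (run elevated on the Hyper-V host). This is a sketch; the VM name, VHD path, switch name, disk size, and ISO path are all placeholders for your environment:

```powershell
# Sketch of the wizard steps above using the Hyper-V module.
# All names and paths below are hypothetical; adjust for your setup.
New-VM -Name 'Dev' `
       -Generation 1 `
       -MemoryStartupBytes 512MB `
       -NewVHDPath 'C:\VMs\Dev\Dev.vhdx' `
       -NewVHDSizeBytes 40GB `
       -SwitchName 'External'

# Mount the Ubuntu Server ISO so the VM boots into the installer.
Set-VMDvdDrive -VMName 'Dev' -Path 'C:\ISOs\ubuntu-14.04-server-amd64.iso'

Start-VM -Name 'Dev'
```

From there you still walk through the Ubuntu installer interactively, but standing up a fresh Dev, QA, or Prod VM becomes a one-liner per stage.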

Never forget -Verbose again

Working with DSC I am constantly having to type -Verbose on my Start-DscConfiguration calls. However, I stumbled across a cool feature of PowerShell that I thought I would share, which will set the -Verbose switch on all your calls for you. PowerShell has a collection of preference variables that allow you to customize its behavior. One such variable is named $VerbosePreference. By default the value of $VerbosePreference is "SilentlyContinue", which requires you to supply the -Verbose switch to see any verbose messages written by cmdlet or function calls. However, if you simply set:

$VerbosePreference = "Continue"

you can now forego passing the -Verbose switch and all the verbose messages will still be displayed. This, combined with the positional Path parameter, can drastically reduce the amount of typing for a Start-DscConfiguration call.

Before:

Start-DscConfiguration -Wait -Verbose -Path .\Test

After:

Start-DscConfiguration .\Test -Wait

Or you can go that extra mile and create a new alias for Start-DscConfiguration:

New-Alias -Name sdsc -Value Start-DscConfiguration

Then all you have to type is:

sdsc .\test -Wait

Like all variables, the values that you set are specific to the current Windows PowerShell session. However, you can add them to your Windows PowerShell profile and have them set in all Windows PowerShell sessions.
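To make that last point concrete, here is one way to persist both the preference and the alias in your profile. This is a sketch: the sdsc alias comes from the post, everything else is standard PowerShell, and it appends blindly without checking whether the lines are already there:

```powershell
# Append the preference variable and alias to your PowerShell profile
# so every new session picks them up. Creates the profile if missing.
if (-not (Test-Path $PROFILE)) {
    New-Item -ItemType File -Path $PROFILE -Force | Out-Null
}
Add-Content -Path $PROFILE -Value @'
$VerbosePreference = "Continue"
New-Alias -Name sdsc -Value Start-DscConfiguration
'@
```

Open a new PowerShell session afterward and both settings will already be in effect.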

PowerShell Desired State Configuration (DSC) Journey by Jacob Benson

I just wanted a list of all the posts in one place. PowerShell Desired State Configuration (DSC) Journey by Jacob Benson: Day 1 (First Configuration), Day 2 (Parameterizing the Configuration), Day 3, Day 4, Day 5, Day 6, Day 7, Day 8, Day 9, Day 10, Day 11, Day 12, Day 13, Day 14, Day 15, Day 16, Day 17, Day 18, Day 19, Day 20, Day 21, Day 22, Day 23, Day 24, Day 25, Day 26, Day 27, Day 28, Day 29