Wednesday, February 25, 2015

The PayEx payment provider for EPiServer Commerce is now public!

I'm glad to announce that the new PayEx payment provider for EPiServer Commerce is now available in the EPiServer NuGet feed, and the source code is available on GitHub. The payment provider is called PayEx.EPi.Commerce.Payment, and the full documentation is also available on GitHub.

Supported payment methods
The provider supports several of the PayEx payment methods.
Prerequisites
The prerequisites for the provider are the following: 
  • EPiServer.CMS version 7.6.3 or higher
  • EPiServer.Commerce version 7.6.1 or higher
  • .NET Framework 4.5 or higher
Enjoy!

Thursday, February 19, 2015

"From complete chaos to Octopus Deploy" Part 8: Lessons learned and useful resources

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

Lessons learned


I usually find it quite easy to summarize the lessons learned after I've completed a task. You simply list all the mistakes you made that others are likely to make as well. In the case of introducing Octopus Deploy to our organization though, there have been very few mistakes to make. We've only had one problem, performance issues, and the cause of this issue lay outside of Octopus.

NuGet.Server

When I first set up Octopus, we didn't use the built-in NuGet feed; instead we used an external feed built using NuGet.Server. As the number of NuGet packages grew, we found that basic functions in Octopus, such as creating a new release, could take up to 10 minutes to finish. Not able to figure out the root of the problem, I had a Skype session with Paul and Vanessa at Octopus Deploy. It didn't take Paul long to identify our external NuGet feed as the performance killer. He explained to me that whenever Octopus requested a NuGet package, NuGet.Server loaded all packages found into memory. As we had no automatic cleanup of our NuGet feed, the number of packages loaded into memory grew quickly every day. Paul suggested we switch either to NuGet.Lucene, which indexes its packages, or to the built-in feed in Octopus. I decided to use the built-in feed and voilà! All the performance issues disappeared!

Supporting colleagues

My second lesson learned is that if you are in charge of introducing Octopus Deploy to an organization of a certain size, you will spend more time supporting your coworkers than you would think. Part of my job is to ensure the technical quality of the projects we deliver and to make my colleagues' workdays as efficient as possible, hence Octopus Deploy. But some of my colleagues are extremely busy and have very little time to sit down and read the Octopus Deploy documentation. So I've spent more time than planned on creating step-by-step guides, answering questions and teaching them the basic functions of Octopus Deploy.

Where are we at today?


Several times, I've mentioned that we have over 100 customers spread across approximately 30 different hosting vendors. Are all of them now using Octopus Deploy? Not yet, but we're getting there, one customer at a time.

We had two choices for how we could introduce Octopus Deploy to our customers:
1) Add all of them immediately
2) Add them one by one, fitting the process into each customer's schedule

We went with option 2 as this was the one we believed our customers would prefer, so now existing and new customers are set up on our Octopus Deploy server every week. And in the end, they'll all be present.

My estimates are that we've covered about 40% of our customers and hosting vendors so far. Approximately 75% of the developers at Epinova now use Octopus Deploy on a weekly basis, and so far I've heard no complaints, only praise and enthusiasm.

Useful resources


The best resources you can find are created by Octopus Deploy themselves:
- The documentation
- Training videos (I really enjoyed these! Make sure you also check out the Community videos at the bottom of this page)
- The Octopus Deploy API (now, THIS is what an API should look like!)

It's a wrap


That's it for this blog series, at least for now. I must say, the time I've spent planning and setting up Octopus Deploy has been a blast! My enthusiasm for this product is insane, and to me that proves they've done something right.

I would like to thank everyone at Octopus Deploy for the great work they're doing! I specifically want to thank Vanessa, Paul and Damian for the close ties they have to the developer community; I've never had to wait for an answer, and it seems that at least one of them is active on Twitter at all times. Last, but not least, I'd like to thank everyone following this blog series, your kind words are heartwarming!

If you want to hear more about my journey towards automated deployments, I'm speaking at the EPiServer meetup in Oslo on March 3rd and the .NET User Group in Bergen on April 29th. I'm also doing a half-day workshop in Bergen in March on "Getting started with Octopus Deploy". Maybe I'll see some of you there?

Tuesday, February 17, 2015

"From complete chaos to Octopus Deploy" Part 7: Load balancing

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

Load balancing

I've received some questions as to how we handle load balanced scenarios in Octopus Deploy, so I decided to dedicate a post to the topic. 

Not all hosting vendors allow us to interact with their load balancers for a variety of reasons, where the main reason seems to be that they don't enjoy easing up on their control of the infrastructure. And that's fine. I mean, we've already 'forced' Octopus Deploy on them (although I believe they should be grateful), so I think it's only fair to let them hold on to their load balancers until they've gotten used to the idea and seen that Octopus is not an evil monster corrupting their servers.

Let's talk about the load balancers we have been allowed to play with instead! The first one out was the Windows Network Load Balancer (not a very good one, but that's a different story), which has its own set of PowerShell cmdlets. As we were able to script against it, we added two script steps to our deployment process: one for removing a server from the load balancer and one for adding the server back: 



If you're not familiar with the concept of child steps in Octopus Deploy, what it does in short is "allow you to wait for a step to finish on one machine before starting the step on the next machine." Read the Octopus Deploy documentation on Rolling Deployments for more information. So in the screenshot above, steps 1.1 to 1.4 will be finished on one machine before they are executed on the next.
You might be curious about what the "Remove from load balancer" and the "Add back to load balancer" steps look like. They both run a PowerShell script towards the Network Load Balancer using a couple of Octopus variables, and I've created these scripts as Gists: 
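For reference, here's a minimal sketch of what two such scripts can look like using the NetworkLoadBalancingClusters cmdlets. This is not the exact content of the Gists, and the drain timeout is just an example value:

```powershell
# "Remove from load balancer" step: drain existing connections, then stop the node.
# $OctopusMachineName is the machine name variable provided by Octopus.
Import-Module NetworkLoadBalancingClusters

Write-Host "Draining and stopping NLB node $OctopusMachineName"
Stop-NlbClusterNode -HostName $OctopusMachineName -Drain -Timeout 120

# "Add back to load balancer" step: start the node so it receives traffic again.
Write-Host "Starting NLB node $OctopusMachineName"
Start-NlbClusterNode -HostName $OctopusMachineName
```

In practice these would be two separate script steps in Octopus, as shown in the deployment process above.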
But what about those load balancers that don't have PowerShell cmdlets available, what to do then? Our hosting vendors have several creative solutions to these scenarios. 
Example 1
The load balancer monitors a port on the server and a site in IIS is set up with a binding towards that port. When the load balancer notices that the site is unavailable it drains all traffic to the server. When the site goes back up, the load balancer starts directing traffic to the server again. 
In this scenario, the PowerShell scripts for removing the server from load and adding it back simply have to stop and start a site in IIS.
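A sketch of what those scripts can reduce to in this scenario (the site name is hypothetical):

```powershell
Import-Module WebAdministration

# Hypothetical name of the IIS site the load balancer health-checks
$monitorSite = "LoadBalancerMonitor"

# "Remove from load" step: stop the site so the health check starts failing
Stop-Website -Name $monitorSite

# ...deployment happens here...

# "Add back to load" step: start the site so the health check passes again
Start-Website -Name $monitorSite
```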
Example 2
The load balancer monitors a file on the server. When the load balancer notices that the file is removed it drains all traffic to the server. When the file appears again, the load balancer starts directing traffic to the server again. In this scenario, the PowerShell scripts for removing the server from load and adding it back simply have to move a file to an alternative location and move it back to its original location afterwards.
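And a corresponding sketch for the file-based variant (the paths are hypothetical):

```powershell
# Hypothetical paths: the load balancer health-checks the first file's existence
$monitorFile = "C:\inetpub\wwwroot\alive.txt"
$parkedPath  = "C:\inetpub\parked"

# "Remove from load" step: move the file away so the health check fails
Move-Item $monitorFile $parkedPath -Force

# ...deployment happens here...

# "Add back to load" step: move the file back so traffic returns
Move-Item (Join-Path $parkedPath "alive.txt") $monitorFile -Force
```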
So there are creative ways of interacting with the load balancer without actually scripting towards it directly. The main challenge with the two examples above is knowing when the load balancer has finished draining the traffic and the server has been removed from load, and knowing when the server is back in load. In these cases, I find that the hosting vendors have a lot more knowledge about how this can be done than I do. So my main advice is to talk to your hosting vendor and ask them how you can make this possible.
In tomorrow's post, it's time to wrap this blog series up with Lessons learned and useful resources

Monday, February 16, 2015

"From complete chaos to Octopus Deploy" Part 6: The hosting vendors

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

The hosting vendors

As I described in part 2 of this blog series, the first step in introducing Octopus Deploy to our organization was to set up several of our projects to be deployed automatically to our internal demo server. After seeing that the deployments worked as expected, we would start using Octopus Deploy towards our customer environments. Dealing with customer environments meant that we would have to start including the hosting vendors.

You might find it a bit risky to wait this long to involve the hosting vendors: what if they refused to let us use Octopus Deploy towards their environments? But I looked upon the challenge from a different perspective. By this time, we were actively using Octopus for about 5 of our projects (although only towards our internal demo server), meaning that approximately 8-10 of our consultants were using Octopus Deploy daily. There is strength in numbers, and I thought that 8-10 developers would have a greater chance of convincing the hosting vendors than me alone. Remember, we're dealing with over 30 hosting vendors; the list is quite long.

We started out with the ones we knew would be easy, the ones who always say yes and do everything we ask them. Before long, our Octopus server was connected to several test and production environments from a couple of different hosting vendors. Hooray!

I created a document titled "Octopus from a hosting vendor's perspective" and asked the developers to distribute it to all the hosting vendors they were working with. All questions we received in return were included in the "Q&A" section of the document so others wouldn't need to ask the same questions. After reading this document, several more hosting vendors gave us their thumbs up.

But now we had to deal with the large hosting vendors, the ones that are usually quite strict. The "head of customer relation management" at Epinova scheduled meetings with the largest ones where we showed them Octopus Deploy and described to them in detail how we wanted to use Octopus towards the customer environments they were hosting. Their main doubts were usually the same ones: Security and SLAs.

Security

There's nothing insecure about Octopus Deploy: all communication between the Octopus server and its tentacles is done over HTTPS, and we always restrict the port the tentacles listen on to our office IP address. We got all our security arguments confirmed when one of our customers, a leading company on web security in Norway, accepted our use of Octopus after analyzing the product themselves. Since then, we simply tell our hosting vendors that X approved it, and they have no further objections.

SLAs

When it comes to SLAs, the hosting vendors were afraid that automatic deployments with Octopus Deploy would lead to more downtime. We explained to them that although we were automating the deployment processes, we would not automatically introduce continuous delivery to all our projects. The number of deployments per customer would stay the same as before, the deployments would just be faster and less error prone. In consequence, it would in fact lead to less downtime.

Some of the SLAs contain clauses stating that the hosting vendor should be notified when a deploy is done and that monitoring of the applications should be turned off beforehand. The hosting vendors have had difficulties enforcing these rules, as developers tend to forget them when they're in a rush. So the hosting vendors were quite excited when we told them that we can automate this in Octopus Deploy as part of the process as well.

At the time of writing, not a single hosting vendor has forbidden us from using Octopus Deploy. Only one has demanded we use polling tentacles; the rest have allowed listening tentacles. In my eyes, this is a great success and to be honest: I had expected a lot more resistance than this. Stay tuned for tomorrow's post: Load balancing

Friday, February 13, 2015

"From complete chaos to Octopus Deploy" Part 5: The deployment process

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?
Part 1: The trigger of change 
Part 2: Where to begin? 
Part 3: The basic concepts of Octopus Deploy
Part 4: Required application changes

The deployment process

If you've been following this blog series from the beginning, you'll remember that my goal for introducing Octopus Deploy was to create a "standard" deployment process for all our projects. That process would look like this: 
As I've explained in earlier posts, the point of this blog series is not to show you step by step how to introduce the same process, the point of this blog series is to make you understand which decisions you need to make if you want to automate your deployment routines and how you can get there. 

So far I've explained how I mapped our customer projects to the basic features of Octopus Deploy, creating a simple structure that would not require too much change in the applications involved. I've shown you the application changes we did have to make, and in this blog post I'd like to discuss the deployment process itself. In other words, I want to take a closer look at the last step in the diagram above: "Deploy project". I won't explain the three steps prior to "Deploy project" as the Octopus documentation explains these excellently without my help. 
Let's get to it then! What would happen in the "Deploy project" step? As all our customers have a web application, my first answer was that the "Deploy project" step would consist of the following: 
1) Deploy the web application to the chosen environment, creating or updating a site in IIS at the same time
2) Warm up the web application by running a PowerShell script that makes a request to the web application and checks the HTTP status code returned
But then I started thinking about the customers that not only have a simple web application, but also for example a windows service, and this step would have to be included: 
3) Deploy and install windows service
And what about load balanced scenarios? They would have to repeat these three steps for each machine in the environment. So already, it's clear that coming up with a default set of steps that would work for every single customer is impossible. And I doubt the developers would be very happy if they were forced to follow a set process for every single project when the possibilities in Octopus Deploy are so many! 
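As an example, the warm-up in step 2 above can be sketched as a small PowerShell script step. The URL and retry counts here are illustrative; in practice the URL would come from an Octopus variable:

```powershell
# Warm-up sketch: request the site until it returns HTTP 200, or give up.
$url = "http://localhost/"
$maxAttempts = 10

for ($i = 1; $i -le $maxAttempts; $i++) {
    try {
        $response = Invoke-WebRequest -Uri $url -UseBasicParsing
        if ($response.StatusCode -eq 200) {
            Write-Host "Warm-up of $url succeeded on attempt $i"
            exit 0
        }
    } catch {
        Write-Host "Attempt $i failed: $($_.Exception.Message)"
    }
    Start-Sleep -Seconds 10
}
throw "Warm-up failed: $url did not return HTTP 200 after $maxAttempts attempts"
```

Failing the deployment when the warm-up fails is the point: a deployment should not be viewed as successful until the application actually responds.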
So I introduced a "best practice" instead: 
- Anything that is logical to automate, should be automated.
- All web applications that are deployed should be warmed up before the deployment can be viewed upon as successful.
Apart from this, the developers are free to decide what will happen in the "Deploy project" step. If you remember "Customer C" from "Part 3" of this blog series, they have an ASP.NET MVC Application and a WebForms application, running in two different environments where one is load balanced. Their deployment process now looks like this: 

Another customer might have a totally different deployment process, where the release notes are emailed to the customer in the first step or where the server monitoring is turned off before the application is deployed. 
The important lesson learned here is that you cannot force one process on 100 different customers. The customers will have preferences as to how the deployment process should look, and so will the hosting vendor and the developers. So customizing a deployment process for each customer, using a set of "best practices" as a basis, has turned out to be a great way to go in our case. Speaking of hosting vendors, we'll be talking more about them in tomorrow's post: The hosting vendors 

Thursday, February 12, 2015

"From complete chaos to Octopus Deploy" Part 4: Required application changes

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?
Part 1: The trigger of change 
Part 2: Where to begin? 
Part 3: The basic concepts of Octopus Deploy
Part 4: Required application changes
Part 5: The deployment process
Part 6: The hosting vendors
Part 7: Load balancing
Part 8: Lessons learned and useful resources

Required application changes

While I tried to minimize the amount of change needed for our applications, there were some changes we could not avoid. I won't go into detail on all of them, but I would like to outline them briefly in case you come across similar needs.

Installing OctoPack

As described in the Octopus documentation, the OctoPack NuGet package builds Octopus Deploy-compatible NuGet packages from your projects. These packages are pushed to Octopus Deploy, and Octopus runs config transforms before deploying the package to the chosen environment. We added the OctoPack NuGet package to all our projects, in addition to a default .nuspec file to supply some basic information about our applications. Once the Octopus TeamCity plugin is installed, it handles the rest: running OctoPack to create the packages.
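A default .nuspec can be as small as this; the id, authors and description below are illustrative, and OctoPack typically supplies the version from the build:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyCustomer.Web</id>
    <version>1.0.0</version>
    <authors>Epinova</authors>
    <description>Web application packaged by OctoPack for Octopus Deploy</description>
  </metadata>
</package>
```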

Config transformations

Before Octopus Deploy, we ran all our config transformations on build. This is a problem, as you will never have the exact same assemblies in any two environments. Imagine I deployed code to my Test environment; in order to deploy the same code to my Production environment, I would have to rebuild it, since the config transformations were run on build. Rebuilding the code would generate a new set of assemblies, which in theory would contain the same code as the assemblies deployed to Test, but they would not be identical, or the exact same files. 
After introducing Octopus Deploy, the config transformations would no longer be run on build. Instead, the NuGet package generated by OctoPack would contain all the config transformation files, and we would configure Octopus to run the config transformations:
In order to change when the config transformations were run, we needed to make a couple of changes to our applications: 

- Disable transformations on build by setting TransformOnBuild to false in our .csproj files
- Set the Build Action of all config transform files to Content so that they would be included in the NuGet package generated by OctoPack
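In MSBuild terms, the two changes boil down to something like this inside the .csproj (element placement and file names are examples):

```xml
<!-- Disable config transformations at build time -->
<PropertyGroup>
  <TransformOnBuild>false</TransformOnBuild>
</PropertyGroup>

<!-- Mark the transform files as Content so OctoPack includes them in the package -->
<ItemGroup>
  <Content Include="Web.Release.Test.config" />
  <Content Include="Web.Release.Prod.config" />
</ItemGroup>
```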

Deploy items

Our last challenge had to do with static files that should reside in the application root folder and that vary from environment to environment, but cannot be transformed. Some examples are robots.txt files (which you could transform if you converted them to XML, but we wanted to avoid that) and license.txt files.

Our solution was to add folders called DeployItems.&lt;EnvironmentName&gt; and add the static files that belong to the given environment to this folder:

We then added a script module in Octopus that all the projects would reference. This script module contained a simple deploy script, that would look for a DeployItems folder postfixed with the environment name, and if found it would copy all the files in the folder to the application root folder:

function Move-DeployItems() {
    Write-Host "Checking if there are any DeployItems to move..."

    if(Test-Path -Path .\Configs\DeployItems.$OctopusEnvironmentName\){
        Write-Host "Moving deploy items from Configs\DeployItems.$OctopusEnvironmentName to root folder"
        Move-Item .\Configs\DeployItems.$OctopusEnvironmentName\* .\ -Force
    }
    
    if(Test-Path -Path .\Configs\DeployItems.$OctopusEnvironmentName.$OctopusMachineName\){
        Write-Host "Moving deploy items from Configs\DeployItems.$OctopusEnvironmentName.$OctopusMachineName to root folder"
        Move-Item .\Configs\DeployItems.$OctopusEnvironmentName.$OctopusMachineName\* .\ -Force
    }

    Write-Host "Deleting deploy items folders"
    Get-ChildItem .\Configs\DeployItems.* | Remove-Item -Force -Recurse
}

Notice that for load balanced environments, you could add a DeployItems.&lt;EnvironmentName&gt;.&lt;MachineName&gt; folder if the files differ per machine as well as per environment. To finish this off, the DeployItems folders are deleted after the files are copied. 

With these minor changes: adding OctoPack, disabling config transforms on build, and adding DeployItems folders for copying static files, our applications were all Octopus Deploy-compatible. Stay tuned for tomorrow's post: The deployment process

Wednesday, February 11, 2015

"From complete chaos to Octopus Deploy" Part 3: The basic concepts of Octopus Deploy

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

The basic concepts of Octopus Deploy

The Octopus Deploy documentation is great when it comes to explaining every aspect of how to set up and use Octopus Deploy, so this should be your main reference when working with Octopus. I also found the Octopus 2.0 Training Videos to be very useful, they're short and precise, giving you exactly what you need.

But! There is one but: the examples in the documentation assume that a company sets up all of its own projects in its own Octopus installation. Which is great if that is how you use Octopus. In our case however, being a consultancy with over 100 customers spread across approximately 30 different hosting vendors, we wanted all our customer projects set up in the same Octopus installation. That meant we had to be absolutely sure that a consultant working with Customer A couldn't accidentally deploy Customer A's project to a machine belonging to Customer B. As I briefly mentioned in the previous blog post, the setup would have to be foolproof. There is no room for error when dealing with automatic deployments.

If you recall the roadmap in my previous blog post, I selected three projects I would attempt to setup in Octopus Deploy. Let's take a closer look at the infrastructure of these projects:

Customer A:
Customer A had a single project, an ASP.NET MVC Application, running in three different environments:
- Our internal demo environment: Epinova.Demo
○ Machine: DemoServer
- The customer's external test environment: Release.Test
○ Machine: CustomerA_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerA_ProductionServer

Customer B:
Customer B had an ASP.NET MVC Application and a Windows Service, running in three different environments:
- Our internal demo environment: Epinova.Demo
○ Machine: DemoServer
- The customer's external test environment: Release.Test
○ Machine: CustomerB_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerB_ProductionServer

Customer C:
Customer C had an ASP.NET MVC Application and a WebForms application, running in two different environments where one is load balanced:
- The customer's external test environment: Release.Test
○ Machine: CustomerC_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerC_ProductionServer1
○ Machine: CustomerC_ProductionServer2

The first challenge was figuring out how the infrastructure of these three customers would map to the basic concepts of Octopus Deploy. Let's take a short look at the basic concepts:

Environments

The Octopus documentation states that "an environment is a group of machines that you will deploy to at the same time; common examples of environments are Test, Acceptance, Staging or Production."

Based on this statement, I would have to create one environment in Octopus per customer environment, resulting in a very long list of environments that would look like this for our three example customers:
- Epinova.Demo
- Release.Test Customer A
- Release.Test Customer B
- Release.Test Customer C
- Release.Prod Customer A
- Release.Prod Customer B
- Release.Prod Customer C

The issue with this (apart from the extremely high number of environments we'd have to maintain once all 100 customers were added) is that most of our projects contain configuration transformations that Octopus Deploy would handle. These files were usually named *.Epinova.Demo.config, *.Release.Test.config and *.Release.Prod.config, and the Octopus documentation states that config transformation files have to be named either *.Release.config or *.&lt;EnvironmentName&gt;.config. In other words, we would have to rename the transformation files for almost all our projects to fit the environment names. 

I did not want to rename all our config transformations, and I certainly didn't want to maintain over 200 different environments. What I wanted was a short list of environments like this:
- Epinova.Demo
- Release.Test
- Release.Prod

This breaks the definition of an environment I quoted above, as you would never deploy to all the machines in the Release.Test environment or the Release.Prod environment at the same time; these machines belong to different customers. However, I found this setup to be the most logical, so I decided to try it out and not care that the definition stated otherwise. So I set up three environments: Epinova.Demo, Release.Test and Release.Prod, and all the config transformation files could remain untouched as they already fit the *.&lt;EnvironmentName&gt;.config convention set by Octopus.

Machines

Machines are just that: machines running your applications. It doesn't matter whether it's an Azure VM or a physical server; they're all viewed upon as "machines". A machine belongs to an environment (or several environments, if necessary). For my three customers, the list of machines per environment would look like this:

Epinova.Demo machines:
- DemoServer

Release.Test machines:
- CustomerA_TestServer
- CustomerB_TestServer
- CustomerC_TestServer

Release.Prod machines:
- CustomerA_ProductionServer
- CustomerB_ProductionServer
- CustomerC_ProductionServer1
- CustomerC_ProductionServer2

Machine Roles

Looking at the list of machines per environment, we still have the issue of how we can make sure that Customer A's ASP.NET MVC Application is not deployed to the machine CustomerB_TestServer. This is where "Machine Roles" are very useful!

When setting up a deployment step, you can assign the step to a certain role. This means that if you have two machines, one with the role db-server and the other with the role web-server, you can configure your web application to only be deployed to the machine with the role web-server (You don't want your web applications running on your database server, right?).

In the same way, we can use machine roles to separate customers from each other. For our customer machines listed above we could add the following roles:

Release.Test machines:
- CustomerA_TestServer (role: custA-web)
- CustomerB_TestServer (role: custB-web)
- CustomerC_TestServer (role: custC-web & custC-forms)

Release.Prod machines:
- CustomerA_ProductionServer (role: custA-web)
- CustomerB_ProductionServer (role: custB-web)
- CustomerC_ProductionServer1 (role: custC-web)
- CustomerC_ProductionServer2 (role: custC-web & custC-forms)

So when setting up a deployment step for Customer A, we would scope that step to the machine role custA-web. This way it would be virtually impossible to deploy the project of one customer to the machine of another.

Notice that the CustomerC_ProductionServer2 machine has two roles: custC-web and custC-forms. Remember, Customer C has both an ASP.NET MVC Application and a WebForms application and in this case we want the MVC application to be deployed to both production servers while the WebForms application is only deployed to the second production server. This can be done by using several roles, where custC-web is used for determining where the MVC application will be deployed and custC-forms is used for determining where the WebForms application will be deployed.

Now that we've figured out how the infrastructure of these three customers maps to the basic concepts of Octopus Deploy, we can move on to the changes we needed to make in our applications to make this work. Stay tuned for tomorrow's post: Required application changes!

Tuesday, February 10, 2015

"From complete chaos to Octopus Deploy" Part 2: Where to begin?

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

Where to begin? 

I had my task at hand: Automate the deployment process for more than 100 different customers spread across approximately 30 different hosting vendors. But where to begin?

I started out by installing the Octopus Deploy server so that I had somewhere to click around and explore my possibilities while I read up on the basic concepts of Octopus Deploy. However, it didn't take me long to realize that before I dug too deep into the technicalities, I needed some sort of goal and a roadmap. I asked myself: "How do I want a standard deployment process to look after I've introduced Octopus Deploy to the organization?"

From this a goal was created:


As you can see from the diagram we were already using Git for source control and TeamCity as a build server and luckily for us, Octopus Deploy and TeamCity go very well together.

So this was my goal, but I still needed a way to get there. Needless to say, this goal would have to be further specified later on as "Deploy project" could mean anything from "Remove server from load balancer, deploy web application, deploy windows service, add server to load balancer, repeat for remaining servers" to "Automatically email release notes to customer, deploy web application, warm up web application". But at this point, specifying exactly what would happen in the "Deploy project" step was impossible.

Creating a roadmap

Next up, I began planning a roadmap to reach this goal I'd set for our "Standard deployment process".

My roadmap turned out something like this:
1. Find the most "average" customer project we had
2. Try to make the project "fit" the deployment process I had in mind, documenting every step on the way.
3. If successful, repeat steps 1-2 for a more complex project. Do this for at least three different projects varying in complexity.

I decided I would focus on only one environment to begin with: an internal demo server we use for almost all our ongoing projects, called Epinova.Demo. If I reached my goal of setting up my "standard deployment process" for at least three different projects against the Epinova.Demo environment, I would move on to the customers' own environments: Test, Staging, Acceptance and Production. However, I wanted to make sure that the "standard deployment process" was foolproof before moving on to external environments.

Finally, I had a plan and a goal! I ran this by a couple of my colleagues to see if they had any input as to which projects would be good for testing this out, and we managed to find some great ones. I also asked them if they could see any immediate flaws with my goal of what our "standard deployment process" would look like, and they all gave me the thumbs up. Thanks guys!

But enough of planning and setting goals, what we all want is to get technical, isn't it? And that's exactly what we'll be doing in tomorrow's post: The basic concepts of Octopus Deploy

Monday, February 9, 2015

"From complete chaos to Octopus Deploy" Part 1: The trigger of change

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?
Part 1: The trigger of change
Part 2: Where to begin?
Part 3: The basic concepts of Octopus Deploy
Part 4: Required application changes
Part 5: The deployment process
Part 6: The hosting vendors
Part 7: Load balancing
Part 8: Lessons learned and useful resources

The trigger of change

At the end of 2013, I had been working on a project for some time when I realized I dreaded every new change and feature the customer requested, but I couldn't quite understand why. I usually love developing new functionality, so I found it quite strange that I now felt the urge to slow the customer down. Suddenly it hit me: it wasn't the new functionality I dreaded, it was deploying it.

The project completely lacked deployment routines. Everything was done manually, and none of the developers felt they had full control of the applications: how they interacted with each other, or how the hosting vendor had set it all up to work as it should. Every deploy felt like walking through a minefield; the risk of blowing something up was huge, and there were regularly fires that needed to be put out because a deploy had gone wrong.

When dealing with manual deployments, you have no immediate knowledge of when the last deployment was done or who did it. It's time-consuming and error-prone, not to mention tedious. Developers don't want to spend their time copying files from one server to another, especially not when it comes to rollbacks.

I love the quote from Lao Tzu: "If you do not change direction, you may end up where you are heading." 

We were heading in the direction of flushing an otherwise great project down the drain because we weren't able to create routines for ourselves. I complained a lot about this to my fiancé, who is also a developer, and he gave me the same answer every day: "Why don't you just automate everything?" I shrugged the answer away every time. Then I realized that's what all the developers on the project were doing: we were all avoiding the issue instead of tackling it head on. So I decided it was time to take charge and change the direction we were going in.

The next time my fiancé said: "Why don't you just automate everything?" I answered: "But how? Where do I start?" He pointed me in the direction of a colleague of his who had introduced Octopus Deploy to one of his projects, and I started looking at this product they were all so excited about. It didn't take me long to see that Octopus Deploy could deliver exactly what we needed, it was almost too good to be true!

Convincing the customer

Now that we had a solution to our issue, we had to convince the customer that the cost of automating the deployment routines would in fact reduce their overall cost of development. I realized our chances of convincing the customer would be larger if the hosting vendor was supporting our case, so I went to the hosting vendor with my plan. They were very positive as they had the same feeling of walking through a minefield as we had, and together we set up a draft of how this could be done.

We decided it would be wise to automate the deployment routines one step at a time, starting with the most basic application in the test environment before moving on to the more complex applications. After successfully automating the test environment, we would move on to the production environment, where we'd have to face the challenge of load balancers, SLAs and a more complex infrastructure in general. We were able to estimate the initial part of the job, automating the most basic application in the test environment, but beyond that we really couldn't give them any useful estimates, as none of us were in complete control of how everything was set up.

That's a bit ironic, isn't it? In order to get complete control, we needed the customer to accept a guesstimate as we didn't have enough control to estimate accurately. Luckily, the customer trusted us and knew we would never try to fool them in any way, so they accepted our guesstimate and a start date for the automation project was agreed upon. I was thrilled!

When everything fails

But not for long. Unfortunately, things don't always work out as planned. Just as the automation project was about to begin, we received orders that all development for this customer had to be paused indefinitely due to their financial situation. I was extremely disappointed; I'd been dreaming of Octopus Deploy for months, and now I'd have to move on to some other project instead.

The same day, the CEO of the consultancy I work for (Epinova) asked to speak to me. I'd discussed my intent to automate the deployment routines with him earlier on and he knew how motivated I was for this task. His question baffled me: "Is there any way you could introduce Octopus Deploy to all our customers?"

Just to clarify what he was asking me: Epinova is a consultancy with (at the time) approximately 30 consultants. We had more than 100 different customers, and for each customer we were responsible for at least one development project. The projects were spread across approximately 30 different hosting vendors.

So what he was actually asking me was: "Are we able to create an automated deployment routine that will suit all our customers? Can you save our developers' time by making them use Octopus Deploy? And are you able to convince all the hosting vendors to play along?"

My answer? "Yes. Yes. Yes."

How would I do all that? I had no idea... But I knew I'd love the challenge! Stay tuned for tomorrow's post: Where to begin?