Wednesday, February 11, 2015

"From complete chaos to Octopus Deploy" Part 3: The basic concepts of Octopus Deploy

Introduction to this blog series

How do you automate the non-existent deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything? Is it really possible to introduce a routine that works for everyone?

The basic concepts of Octopus Deploy

The Octopus Deploy documentation is great when it comes to explaining every aspect of how to set up and use Octopus Deploy, so this should be your main reference when working with Octopus. I also found the Octopus 2.0 Training Videos to be very useful; they're short and precise, giving you exactly what you need.

But! There is one but: the examples in the documentation assume that a single company has all of its projects set up in its own Octopus installation, which is great if that is how you use Octopus. In our case, however, being a consultancy with over 100 customers spread across approximately 30 different hosting vendors, we wanted all our customer projects set up in the same Octopus installation. That meant we had to be absolutely sure that a consultant working with Customer A couldn't accidentally deploy Customer A's project to a machine that belonged to Customer B. As I briefly mentioned in the previous blog post, the setup would have to be foolproof. There is no room for error when dealing with automatic deployments.

If you recall the roadmap in my previous blog post, I selected three projects I would attempt to set up in Octopus Deploy. Let's take a closer look at the infrastructure of these projects:

Customer A:
Customer A had a single project, an ASP.NET MVC Application, running in three different environments:
- Our internal demo environment: Epinova.Demo
○ Machine: DemoServer
- The customer's external test environment: Release.Test
○ Machine: CustomerA_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerA_ProductionServer

Customer B:
Customer B had an ASP.NET MVC Application and a Windows Service, running in three different environments:
- Our internal demo environment: Epinova.Demo
○ Machine: DemoServer
- The customer's external test environment: Release.Test
○ Machine: CustomerB_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerB_ProductionServer

Customer C:
Customer C had an ASP.NET MVC Application and a WebForms application, running in two different environments, one of which is load balanced:
- The customer's external test environment: Release.Test
○ Machine: CustomerC_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerC_ProductionServer1
○ Machine: CustomerC_ProductionServer2

The first challenge was figuring out how the infrastructure of these three customers would map to the basic concepts of Octopus Deploy. Let's take a short look at the basic concepts:

Environments

The Octopus documentation states that "an environment is a group of machines that you will deploy to at the same time; common examples of environments are Test, Acceptance, Staging or Production."

Based on this statement, I would have to create one environment in Octopus per customer environment, resulting in a very long list of environments that would look like this for our three example customers:
- Epinova.Demo
- Release.Test Customer A
- Release.Test Customer B
- Release.Test Customer C
- Release.Prod Customer A
- Release.Prod Customer B
- Release.Prod Customer C

The issue with this (apart from the extremely high number of environments we'd have to maintain once all of our 100 customers were added) is that most of our projects contain configuration transformations that Octopus Deploy would handle. These files were usually named *.Epinova.Demo.config, *.Release.Test.config and *.Release.Prod.config, and the Octopus documentation states that config transformation files have to be named either *.Release.config or *.<EnvironmentName>.config. In other words, we would have to rename the transformation files for almost all our projects to match the environment names.

I did not want to rename all our config transformations, and I certainly didn't want to maintain over 200 different environments. What I wanted was a short list of environments like this:
- Epinova.Demo
- Release.Test
- Release.Prod

This breaks the definition of an environment I quoted above, as you would never deploy to all the machines in the Release.Test environment or the Release.Prod environment at the same time; these machines belong to different customers. However, I found this setup to be the most logical, so I decided to try it out and not worry that the definition stated otherwise. So I set up three environments: Epinova.Demo, Release.Test and Release.Prod, and all the config transformation files could remain untouched as they already fit the *.<EnvironmentName>.config convention set by Octopus.
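With that convention, a project keeps transform files like Web.Release.Test.config next to its Web.config. As a rough illustration (the connection string name and value here are made up, not from the actual projects), such an XDT transform might look like this:

```xml
<!-- Web.Release.Test.config: picked up when deploying to the
     Release.Test environment. Values below are placeholders. -->
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <add name="MainDb"
         connectionString="Server=testdb;Database=App;Integrated Security=true"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```

Because the file name encodes the environment, renaming environments in Octopus would have meant renaming files like this across every project, which is exactly what the short environment list avoids.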

Machines

Machines are just that: machines running your applications. It doesn't matter whether it's an Azure VM or a physical server; they're all treated as "machines". A machine belongs to an environment (or several environments, if necessary). For my three customers, the list of machines per environment would look like this:

Epinova.Demo machines:
- DemoServer

Release.Test machines:
- CustomerA_TestServer
- CustomerB_TestServer
- CustomerC_TestServer

Release.Prod machines:
- CustomerA_ProductionServer
- CustomerB_ProductionServer
- CustomerC_ProductionServer1
- CustomerC_ProductionServer2

Machine Roles

Looking at the list of machines per environment, we still have the issue of making sure that Customer A's ASP.NET MVC Application is not deployed to the machine CustomerB_TestServer. This is where "Machine Roles" come in very useful!

When setting up a deployment step, you can assign the step to a certain role. This means that if you have two machines, one with the role db-server and the other with the role web-server, you can configure your web application to only be deployed to the machine with the role web-server (You don't want your web applications running on your database server, right?).
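The scoping behaviour described above can be sketched in a few lines. This is a conceptual illustration only, not Octopus internals; the machine names and the function are hypothetical:

```python
# Conceptual sketch of role scoping: a deployment step scoped to a role
# only ever targets machines that hold that role.

machines = {
    "WebServer01": {"web-server"},
    "DbServer01": {"db-server"},
}

def targets_for_step(step_role, machines):
    """Return the machines a step scoped to `step_role` would deploy to."""
    return sorted(name for name, roles in machines.items() if step_role in roles)

print(targets_for_step("web-server", machines))  # → ['WebServer01']
```

A step scoped to web-server never sees DbServer01, which is the property that keeps web applications off the database server.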

In the same way, we can use machine roles to separate customers from each other. For our customer machines listed above we could add the following roles:

Release.Test machines:
- CustomerA_TestServer (role: custA-web)
- CustomerB_TestServer (role: custB-web)
- CustomerC_TestServer (roles: custC-web & custC-forms)

Release.Prod machines:
- CustomerA_ProductionServer (role: custA-web)
- CustomerB_ProductionServer (role: custB-web)
- CustomerC_ProductionServer1 (role: custC-web)
- CustomerC_ProductionServer2 (roles: custC-web & custC-forms)

So when setting up a deployment step for Customer A, we would scope that step to the machine role custA-web. This way it would be virtually impossible to deploy the project of one customer to the machine of another.

Notice that the CustomerC_ProductionServer2 machine has two roles: custC-web and custC-forms. Remember, Customer C has both an ASP.NET MVC Application and a WebForms application, and in this case we want the MVC application deployed to both production servers while the WebForms application is only deployed to the second production server. This can be done by using several roles, where custC-web determines where the MVC application will be deployed and custC-forms determines where the WebForms application will be deployed.
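Reusing the same sketch as before (an illustrative approximation of Octopus' role scoping, not its actual implementation; only the machine and role names come from the post), the Customer C setup resolves like this:

```python
# Customer C's production machines and their roles, as listed above.
machines = {
    "CustomerC_ProductionServer1": {"custC-web"},
    "CustomerC_ProductionServer2": {"custC-web", "custC-forms"},
}

def targets(role):
    """Machines a deployment step scoped to `role` would target."""
    return sorted(m for m, roles in machines.items() if role in roles)

# MVC step scoped to custC-web hits both servers;
# WebForms step scoped to custC-forms hits only the second one.
print(targets("custC-web"))    # → both production servers
print(targets("custC-forms"))  # → ['CustomerC_ProductionServer2']
```

The same mechanism that keeps customers apart also handles this asymmetric deployment within a single customer, just by granting a machine more than one role.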

Now that we've figured out how the infrastructure of these three customers maps to the basic concepts of Octopus Deploy, we can start looking at the changes needed in our applications to make this work. Stay tuned for tomorrow's post: Required application changes!
