Friday, November 20, 2015

Guest blog post for Women 2.0

Many women worry about having to choose between being a successful mother and being a successful career woman, and I sympathize with their concerns. Believe me, I've had them myself in the past, and I still feel them surfacing during extremely stressful periods. However, for the past three years I've combined the two; let's call it being a career mother. And I believe I have succeeded so far.

Pluralsight has previously written about this on their blog and created a testimonial video focusing on the topic. This landed me an opportunity to share my story with Women 2.0, and earlier this month, they published a post on how I spent my first maternity leave training in order to up my technical skills. You can read the post here:


Friday, October 23, 2015

Your definition of "legacy" impacts how quick you are to rewrite code

If you were to tell your colleague that a piece of code was legacy, would he or she understand what you were saying? Most likely, yes. What if you asked the same colleague to define the term legacy for you? You might receive any answer in return, as there is no single, agreed-upon definition of what legacy code is.

Those who have read "Working Effectively with Legacy Code" by Michael C. Feathers (an excellent book all developers should read) are likely to argue that legacy code is code without unit tests. This, however, eliminates the possibility of code with unit tests being legacy, and I can swear I've seen legacy code with satisfactory test coverage. In my recent developer survey, the definition of legacy was something I wanted to investigate, so I asked the question "In your opinion, what makes code legacy?" with the following results:

The survey also contained a couple of questions regarding the rewriting of code, and going through the results, a question formed in my mind: Is there a connection between a developer's definition of "legacy" and how quick they are to rewrite code? More specifically, how does agreement with the statement "I have advised that a piece of code should be rewritten simply because rewriting it would be easier than having to figure out how the original code worked" vary depending on their definition of legacy?

Of those who define legacy code to be code lacking unit tests, 62.6% have advised that a piece of code should be rewritten simply because rewriting it would be easier than having to figure out how the original code worked. 

Of those who define code to be legacy as soon as it's in production, 73.7% have advised that a piece of code should be rewritten simply because rewriting it would be easier than having to figure out how the original code worked. 

Of those who define code to be legacy if someone other than themselves wrote it, 92.3% have advised that a piece of code should be rewritten simply because rewriting it would be easier than having to figure out how the original code worked. 

So your definition of "legacy" actually impacts how quick you are to rewrite code!
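If you want to slice the raw spreadsheet the same way yourself, the cross-tabulation above boils down to grouping responses by legacy definition and computing the share that agreed with the rewrite statement. Here's a minimal Python sketch; the field names and sample rows are made up for illustration, and the real survey export will look different:

```python
from collections import defaultdict

def rewrite_rate_by_definition(responses):
    """Group survey rows by their definition of "legacy" and return,
    per definition, the percentage that advised a rewrite just to
    avoid figuring out how the original code worked."""
    totals = defaultdict(int)
    agreed = defaultdict(int)
    for row in responses:
        definition = row["legacy_definition"]
        totals[definition] += 1
        if row["advised_rewrite"]:
            agreed[definition] += 1
    return {d: round(100 * agreed[d] / totals[d], 1) for d in totals}

# Hypothetical sample rows; the real export has one row per respondent.
sample = [
    {"legacy_definition": "no unit tests", "advised_rewrite": True},
    {"legacy_definition": "no unit tests", "advised_rewrite": False},
    {"legacy_definition": "someone else wrote it", "advised_rewrite": True},
]

print(rewrite_rate_by_definition(sample))
```

Running this over the full response sheet, with the definitions mapped to the survey's answer options, reproduces the percentages quoted above.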

If you want to dive into the results of the survey yourself, you can find them here. If you're interested in hearing more on this subject, you can watch my talk "What is the actual life expectancy of your code?" from Leetspeak earlier this month.

Thursday, October 15, 2015

What is the actual life expectancy of your code?

Last Friday, I flew to Stockholm with my fiancée and my two-month-old daughter to speak at Leetspeak. Leetspeak is an affordable one-day conference, set on a Saturday so that as many developers as possible can attend without having to take time off work. I've been super excited about this conference for quite a while now (you might remember me blogging about Leetspeak leading by example for female tech speakers), and the conference absolutely lived up to my expectations.

What really blew my mind were the organizers. They had complete control of everything, never leaving the attendees or speakers unsure of what to do, where to go or how to get there. They welcomed us with open arms, arranging everything from hotels, flights, baby seats, dinners, parties, you name it. It's clear that the key to a successful conference is proper organization and the crew from tretton37 are again leading by example.

The downside of bringing a two-month-old baby to a conference is that you never really get to focus on anything for more than 10 minutes at a time. Let's just say I look forward to watching all the talks over again, as I was at the back most of the time, walking in circles, hoping she would sleep just a little bit longer. And of course you miss out on the parties, but rumour has it they were great.

Leetspeak, for me, was a wrap-up of what I've been doing for the past months. In August, I created a developer survey, and that survey has been the basis for my talk at Leetspeak. The poll results show some really interesting correlations between definitions of "legacy" and how quick developers are to rewrite code, as well as other things. I'll be blogging more about this in the weeks to come. If you'd like to dig into the results yourself to see if you are able to find any other connections, you can do that here.

Or if you'd rather hear my conclusions, you can watch the recording of my talk "What is the actual life expectancy of your code?" below. Thank you to everyone who attended, and thank you for all the feedback you have provided!

Karoline Klever - What is the actual life expectancy of your code? from tretton37 on Vimeo.

Now that Leetspeak is over, what's next? Well, I have some interesting projects going on that I can't talk about just yet. The one I can mention is this: I was interviewed for an upcoming article on women in tech yesterday, and I'm really looking forward to receiving the first draft.

Monday, October 5, 2015

My Pluralsight testimonial video

Earlier this year, a production team from Pluralsight flew to Norway to film my testimonial video. They followed me around for two days, doing interviews and interacting with my family and colleagues. It was an awesome experience, and the team from Pluralsight was great! I've previously blogged about the experience and shared some behind-the-scenes footage.

The testimonial video has now been released and as strange as it is seeing myself, I love the result!

Friday, August 21, 2015

Interview on

I'm very excited that others have found interest in my developer survey lately, has even published an article about it.

Check it out!

Monday, August 17, 2015

The results of my developer survey

I'm currently working on a presentation called "What is the actual life expectancy of your code?" for Leetspeak in Stockholm in October. While researching this topic, I wrote down a series of questions I wanted to look further into, and last week I published a developer survey on Twitter that contained some of these questions.

The point of the survey was to get some statistics to back up the message of my presentation, and to see if other developers share my opinions and experiences. The number of responses surprised me: a total of 291 developers responded to the survey! Thank you so much to each and every one of you!

A lot of people asked whether I would publish the results, and my answer is: of course! I won't comment on the results or my interpretations of them at this point in time, as that would give away too much of my presentation. Neither will I publish the entire spreadsheet of responses before I'm done with my presentation, but I'm happy to publish the basic statistics right away. I must admit, I find some of them quite interesting!

As you can see, 63 people responded "Other" to this last question. Let's take a look at what their definition of legacy is:

"It's legacy to me if it's causing more trouble than it's fixing, which could be a combination of any of the above."

"I don't have a definition for legacy."

"It's legacy if you're scared to touch it for whatever reason"

"it is not in production or un-maintained 3rd party components "

"When code is written in such a way that only the one that orininally wrote it will ever understand it"

"Code is legacy if it is not in active development"

"I'd say it's legacy if that (part of the) code base is no longer maintained. For example, one module in an application can be legacy while the rest is being actively updated."

"It's legacy if the complexity is high and it's untested. Glue code doesn't always need tests."

"Whenever a developer doesnt care"

"As soon as it's written"

"If I wrote it more than 3 months ago"

"It's written in a language that is no longer supported or hasn't been updated/worked on for multiple years, or is written in a way which we no longer support - such as transitioning from web forms from mvc, web forms would be considered legacy"

"I don't know"

"noone dares to touch it"

"When it no longer works"

"It's legacy if it requires changing, and it proves difficult to refactor into a state from which that change is easy."

"If the codebase hasnt been touched in X years"

"It's legacy if it's using patterns and solutions that are now considered a poor solution, or just better ones exist."

"It's legacy when you're afraid to change it"

"Old and lacking documentation. Unsupported platforms"

"If it lacks any tests at all"

"It's not legacy if you're running it, it may be a debt of understanding."

"It's legacy when the team responsible for it is scared of changing it."

"It's legacy if it's unmaintainable"

"not continuously maintained, no docs, platform out of date"

"It's legacy if the code is there but the meaning is not."

"Evertime I open a file and it is not empty, it is legacy"

"It's legacy when there is no reasonable upgrade path to a modern system."

"It's legacy if we're afraid to change it"

"No team knowledge of the code"

"It's legacy if it's a solution in search of a problem (e.g. violates YAGNI)"

"It's legacy when it's easy to make mistakes while trying to understand or modify it."

"Nobody did a change to the code for 6 months."

"design has terrible and obvious flaws that impede maintainability"

"If no one fully understands what it does and how"

"After the first commit, it's legacy."

"When it's intent is no longer obvious (having newer frameworks help that, unit tests too, etc)"

"code should be living. Legacy is code you cannot change for technical reasons (e.g. no access) or because you don't understand what it does."

"if development on the project has stopped"

"It is legacy if I don't agree with how it was written."

"Poorly factored"

"If it has not been (properly)maintained"

"when it becomes too hard to maintain (adjust to current business needs, fix bugs, etc);;"

"if it can't be easily changed in the face of new requirements"

"It's legacy if we no longer recommend external people use it."

"Changes in Apis, data formats can all push code into legacy phase of life "

"When people responsible for this code has left"

"It's legacy if everyone is afraid to change it."

"It's legacy if it has stopped evolving new features while its maintainability has degraded from lack of use."

"Unsupported Technologies (i.e Oracle Forms 6)"

"as soon as the requirements change and code isn't updated accordenly due to too high investments"

"Another product has replaced it and still migrating systems/customers off it"

"If it is hard to understand, change or easily build on a new machine "

"If there's a newer version in production"

"It's legacy when it's about to be replaced but must live side by side with the replacement for a short while "

"It's legacy the moment it gets written"

"If compilers or other required build artifacts are no longer available"

"In general it is legacy as soon as it is very hard to maintain. Old languages/framework and a lack of unit tests is a factor, but mainly badly written code."

"no longer used or in the process of being replaced"

"If not regularly maintained"

Pretty interesting list, don't you think? So what do you make of these results? Did any of them come as a surprise to you?

Sunday, June 14, 2015

Women in STEM, silencing the opposition

The main news regarding women in STEM (science, technology, engineering and mathematics) last week was that of Nobel Prize winner and biochemist, Sir Tim Hunt putting his foot in his mouth at a conference in Seoul. He'd been asked to speak at a meeting about women in STEM and the not-so-enlightening words he managed to utter were:

"Let me tell you about my trouble with girls. Three things happen when they are in the lab. You fall in love with them, they fall in love with you, and when you criticise them, they cry."

I have to admit, I laughed in disbelief when I first read these words. What was he thinking? Was it an attempt at a bad joke? Or was this his honest opinion? Whatever his reason for saying these words, the reactions did not wait. Twitter exploded. Some of the reactions are quite constructive and entertaining, such as the #distractinglysexy hashtag; others are not worth quoting as they mainly consist of name-calling and distasteful language.

As a result of all this, Sir Tim Hunt has been forced to resign from his honorary research position at University College London (UCL) and from his science committee seat at the European Research Council. His wife, a biologist, has also suffered a blow to her career, as she is a professor at the same university. In short, his career is over.

Three sentences was all it took to end the career of a Nobel Prize winner. That scares me. That scares me a lot more than a male scientist criticising women in STEM. Is this what we're doing now? Throwing away great scientists because of one mistake, taking away their chance of further research into the field we all love so much?

Has women in STEM become such a controversial topic that anyone who is against gender equality, or anyone who has an opinion on diversity that isn't mainstream, should be silenced? Do we need to remind each other how history has played out with regard to the silencing of opposition?

In order to have a healthy discussion about women in STEM, or any topic for that matter, we must be able to communicate and listen to each other. Only then can we identify the true issues and find a solution that will last. Silencing those who have different opinions than us by using threats, name-calling and forced resignations is dangerous and might weaken the cause. What happens the next time you want to organize a meeting about women in STEM and no one dares speak up, because they're afraid of saying something that might be misinterpreted and cause them to lose their jobs? Whatever cause you're fighting for, if you silence your opposition by any of the measures mentioned above, you will most likely be hurting your cause more than helping it.

Why can't we all rise above public shaming? And why couldn't UCL have been content with a public apology? I was sad when I read the words of Sir Tim Hunt, but I'm equally sad to see the damaging effect those words have had on a great scientist.

Disclaimer: I call the words of Tim Hunt a mistake in my post; this is to make it easier for myself to express my opinions without having to elaborate too much. His words might have been deliberate, or they might have been a mistake; that is not for me to say. To be clear, I am all for women in STEM, and there's nothing I want more than gender equality.

Tuesday, June 9, 2015

Leetspeak leading by example for female tech speakers

This morning, the Leetspeak conference website went live with their first speaker announcements:

I'm extremely excited and grateful to be on the speaker list, and to be honest I've been looking forward to this announcement for some time now. If you're following me on Twitter, you might already know that I'm currently on bed rest, expecting my second child to be born any day now. When she's born, I'll be home on maternity leave for eight months and as a career-loving woman, being "away" from the tech-scene for that long feels quite scary. 

When the Leetspeak organizers contacted me and asked if I would be interested in speaking at their conference, I gave them the whole "currently expecting a baby" speech, and I was overwhelmed by their response: Bring your baby and your fiancée, we'll organize everything!

I was so happy with this response that I had to tweet about it (without revealing which conference it was, as the announcements were not out yet):

And that, ladies and gentlemen, is how you attract more women to the tech industry. Show them that it's possible to balance a career with family life, and help them do it. What Tretton37 (the company behind Leetspeak) is doing here is, in my opinion, extraordinary, and they're setting a fantastic example for other conferences out there. Let's hope this inspires others to do the same!

Do you have a similar experience? Please tell me about it so we can celebrate these amazing conferences!

Tuesday, April 14, 2015

Behind the scenes of my Pluralsight testimonial video

A little over a year ago, Pluralsight published an interview on their blog about how I spent my first maternity leave training. Since then a lot has happened. Long story short, my second maternity leave is just around the corner, and I've engaged in yet another interesting project with Pluralsight: A video testimonial.

Last week, a Pluralsight production team of four came to Norway from the US to film the video testimonial. They followed me around the office for a day and spent a day at home with my family doing interviews and filming our typical Saturday. Quite an experience considering I've never before been in front of a camera!

I'll share the video when it's published, meanwhile you can take a look at some of the photos taken during the filming:

A photo posted by Phil Hunter (@pbhunter) on

I had an amazing time shooting the video; this is without doubt the coolest experience I've had in my career so far! And it was all because of the wonderful team from Pluralsight, being such a fun gang despite having to deal with jet lag and a slightly nervous me.

Thank you, Mariangel, Phil, Rick and Justin! And of course, a big thanks to Pluralsight for giving me this opportunity!

Monday, April 13, 2015

The epic fails of speakers I admire

After publishing my previous blog post about a speaker's (second) worst nightmare, I've been overwhelmed by the support and encouragement I've received. I cannot express my gratitude enough! It would have been easier not to blog about the experience and to pretend it never happened, but at the same time that would be a waste. I think it's better to put it all out there in case other speakers experience the same disappointment; that way they'll know they're not the only ones who have been unlucky.

During the previous month, several speakers have reached out to me and told me about their "speaker fails". Boy, do I wish I'd known about all of these episodes earlier! Knowing that so many of the speakers I admire and look up to have had similar experiences makes it all just a little less scary. I asked them if I could summarize their stories in a blog post and they all said yes, so here we go!

The episode that surprised me the most was the one told by Julie Lerman, who in my opinion is the greatest female role model in the tech industry:
She ran into some technical issues during her presentation, which resulted in a disappointing rating. Speaking of technical issues, there are so many of them!

Ranging from BSODs...
... laptop and projector failure...

... not having the correct display adapter...
... and of course, no shows:
Imagine Hadi Hariri having a no show! That's just insane, if there's one speaker I'd never miss when I'm at a conference, it's Hadi.

Isn't it refreshing to see that so many of the amazing speakers out there have had these experiences? Apparently, they're all human and they've all had to fight through some troubles to get where they are today. Last but not least, I love how they are openly sharing these episodes, making it easier for the rest of us to handle our speaker failures. Thanks everyone!

As for my own speaker failure, I've got the same workshop booked at two other events in the next month and I've applied to become a Pluralsight author using the material from the workshop. So who knows, maybe this will all work out for the best?

Have you experienced any epic fails yourself that you'd like to share? I'd love to hear about them!

Wednesday, March 11, 2015

A speaker's (second) worst nightmare

Imagine you've submitted a paper for a 3-hour workshop to a tech conference and it gets accepted. You spend hours upon hours, every night for weeks on end, preparing the workshop: creating exercises, preparing discussions and of course outlining the required amount of theory. Prepping, fine-tuning and making changes until you finally feel you've satisfied your own expectations.

You fly across the country to the conference in question, super excited about hosting the workshop, hoping you've prepared enough. You head on over to the room you've been assigned quite early to get everything ready and make sure the room has everything you need. And the waiting begins. Everyone who's held a presentation knows that the last 15 minutes before it's time to start are the worst. The nerves begin to take over, you try not to think about what you're going to say as that will only make you forget everything you've actually planned on saying. What if no one likes it? What if someone asks a really difficult question you can't answer? 

Then the workshop participants start arriving. First one, then two. Three. Four... You check your watch and it's time to start. But there are only four participants? Well, numbers don't matter; the important thing is that you give these four participants your best effort! You introduce yourself, and within a couple of minutes you mention .NET. One of the participants puts their hand up:

Participant #3: "Excuse me, is this workshop aimed at .NET developers?" 
You: "Yes, unfortunately it is. It says so in the workshop description"
Participant #3: "Oh, sorry. I guess I'll have to find a different workshop then..."
Participant #4: "Crap, me too. I don't even have Windows installed."
The two participants leave and now there are two left. 
Participant #1: "Were we supposed to bring laptops? Because I don't have one". 
Participant #2: "Me neither"

Imagine that happening. Brutal, right? 
Well, it happened to me today and I'm heartbreakingly disappointed. But before I start getting mushy, let me tell you how this all ended. None of the participants had their laptops, and the majority of the workshop was based on the participants doing hands-on exercises. In other words: without laptops, I really didn't have a 3-hour workshop to go on with. I asked them straight out, "Would you rather attend another workshop or would you like me to show you the parts that do not require laptops?" They wanted me to go on, and I did. I had planned on approximately 30 minutes of theory with the rest being hands-on exercises and group discussions, so I had to throw all my material out the window and wing it. We went on for an hour and a half of interesting discussions and theory, and then we called it a day.
So here I am, at the airport on my way home. Disappointed and slightly angry, wondering how this could happen. I think I've gone through all the possible options:

Was the topic wrong? 
The topic was "Getting started with Octopus Deploy" and based on previous talks I've done on the subject and blog posts I've written, this is a topic that really seems to excite people. Also, I don't believe the workshop would have been accepted to the conference if the topic was wrong. 

Am I not a good enough speaker?
Maybe the conference participants looked at the agenda, saw my workshop and thought "Interesting topic, but I'm not a fan of the speaker". However, I'm not able to make that add up either. Most of the participants haven't ever heard me speak and the one lightning talk I did at the conference last year received good enough feedback. 

Was the audience wrong? 
The conference is said to be non-technology specific, meaning all types of developers attend. Whether it's Java, .NET or something completely different, everyone should be able to find something that appeals to them. However, my impression is that the majority of the developers were Java developers. Still though, you would think that the conference organizers take this into account when selecting their agenda. If they don't believe the potential audience to be large enough, they wouldn't accept the talk, right?

Or maybe it was bad luck? 
As none of the above fit, this is the option I'm left with. I'm not ready to accept that it was all bad luck, but I can't decide whether I have the right to be angry or if I just have to accept that this was not my day. Right now, I just want to board my flight so I can get home to my bed. 
The only thought keeping my motivation up right now is that this is only a speaker's second-worst nightmare. What's the worst? The worst is a room full of participants hating your workshop. At least that didn't happen.

Wednesday, February 25, 2015

The PayEx payment provider for EPiServer Commerce is now public!

I'm glad to announce that the new PayEx payment provider for EPiServer Commerce is now available in the EPiServer NuGet feed, and the source code is available on GitHub. The payment provider is called PayEx.EPi.Commerce.Payment and the full documentation is available on GitHub.

Supported payment methods
The provider supports several of the PayEx payment methods:
The prerequisites for the provider are the following: 
  • EPiServer.CMS version 7.6.3 or higher
  • EPiServer.Commerce version 7.6.1 or higher
  • .NET Framework 4.5 or higher

Thursday, February 19, 2015

"From complete chaos to Octopus Deploy" Part 8: Lessons learned and useful resources

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

Lessons learned

I usually find it quite easy to summarize the lessons learned after I've completed a task. You simply list all the mistakes you made that others are likely to make as well. In the case of introducing Octopus Deploy to our organization, though, there have been very few mistakes to make. We've only had one problem, performance issues, and its cause lay outside of Octopus.


When I first set up Octopus, we didn't use the built-in NuGet feed; instead we used an external feed built using NuGet.Server. As the number of NuGet packages grew, we found that basic functions in Octopus, such as creating a new release, could take up to 10 minutes to finish. Not able to figure out the root of the problem, I had a Skype session with Paul and Vanessa at Octopus Deploy. It didn't take Paul long to identify our external NuGet feed as the performance killer. He explained to me that whenever Octopus requested a NuGet package, NuGet.Server loaded all packages found into memory. As we had no automatic cleanup of our NuGet feed, the number of packages loaded into memory grew quickly every day. Paul suggested we switch either to NuGet.Lucene, which indexes its packages, or to the built-in feed in Octopus. I decided to use the built-in feed and voilà! All the performance issues disappeared!

Supporting colleagues

My second lesson learned is that if you are in charge of introducing Octopus Deploy to an organization of a certain size, you will spend more time supporting your coworkers than you would think. Part of my job is to ensure the technical quality of the projects we deliver and to make my colleagues' workdays as efficient as possible, hence Octopus Deploy. But some of my colleagues are extremely busy and have very little time to sit down and read the Octopus Deploy documentation. So I've spent more time than planned creating step-by-step guides, answering questions and teaching them the basic functions of Octopus Deploy.

Where are we at today?

Several times, I've mentioned that we have over 100 customers spread across approximately 30 different hosting vendors. Are all of them now using Octopus Deploy? Not yet, but we're getting there, one customer at a time.

We had two choices for how to introduce Octopus Deploy to our customers:
1) Add all of them immediately
2) Add them one by one, gradually, fitting the process into each customer's schedule

We went with option 2 as this was the one we believed our customers would prefer, so now existing and new customers are set up on our Octopus Deploy server every week. And in the end, they'll all be present.

My estimates are that we've covered about 40% of our customers and hosting vendors so far. Approximately 75% of the developers at Epinova now use Octopus Deploy on a weekly basis, and so far I've heard no complaints, only praise and enthusiasm.

Useful resources

The best resources you can find are created by Octopus Deploy themselves:
- The documentation
- Training videos (I really enjoyed these! Make sure you also check out the Community videos on the bottom of this page)
- The Octopus Deploy API (now, THIS is what an API should look like!)

It's a wrap

That's it for this blog series, at least for now. I must say, the time I've spent planning and setting up Octopus Deploy has been a blast! My enthusiasm for this product is insane, and to me that proves they've done something right.

I would like to thank everyone at Octopus Deploy for the great work they're doing! I specifically want to thank Vanessa, Paul and Damian for the close ties they have to the developer community, I've never had to wait for an answer and it seems that at least one of them is active on Twitter at all times. Last, but not least, I'd like to thank everyone following this blog series, your kind words are heartwarming!

If you want to hear more about my journey towards automated deployments, I'm speaking at the EPiServer meetup in Oslo on March 3rd and the .NET User Group in Bergen on April 29th. I'm also doing a half-day workshop in Bergen in March on "Getting started with Octopus Deploy". Maybe I'll see some of you there?

Tuesday, February 17, 2015

"From complete chaos to Octopus Deploy" Part 7: Load balancing

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

Load balancing

I've received some questions as to how we handle load balanced scenarios in Octopus Deploy, so I decided to dedicate a post to the topic. 

Not all hosting vendors allow us to interact with their load balancers for a variety of reasons, where the main reason seems to be that they don't enjoy easing up on their control of the infrastructure. And that's fine. I mean, we've already 'forced' Octopus Deploy on them (although I believe they should be grateful), so I think it's only fair to let them hold on to their load balancers until they've gotten used to the idea and seen that Octopus is not an evil monster corrupting their servers.

Let's talk about the load balancers we have been allowed to play with instead! The first one out was the Windows Network Load Balancer (not a very good one, but that's a different story), which has its own set of PowerShell cmdlets. As we were able to script against it, we added two script steps to our deployment process: one to remove a server from the load balancer and one to add the server back:

If you're not familiar with the concept of child steps in Octopus Deploy, what they do, in short, is "allow you to wait for a step to finish on one machine before starting the step on the next machine." Read the Octopus Deploy documentation on Rolling Deployments for more information. So in the screenshot above, steps 1.1 to 1.4 will finish on one machine before they are executed on the next.

You might be curious about what the "Remove from load balancer" and "Add back to load balancer" steps look like. They both run a PowerShell script against the Network Load Balancer using a couple of Octopus variables, and I've created these scripts as Gists:
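
As a minimal sketch of the idea (not the actual Gists), assuming the Windows NetworkLoadBalancingClusters PowerShell module is available on the machine, the two steps might look something like this. $OctopusMachineName is an Octopus system variable; the 60-second drain timeout is a hypothetical value:

```powershell
Import-Module NetworkLoadBalancingClusters

# "Remove from load balancer": drain existing connections, then stop the node.
Stop-NlbClusterNode -HostName $OctopusMachineName -Drain -Timeout 60

# ... the deployment steps for this machine run here ...

# "Add back to load balancer": start the node so it receives traffic again.
Start-NlbClusterNode -HostName $OctopusMachineName
```

In a real process, the two halves would of course live in the two separate script steps shown in the screenshot, with the deployment steps in between.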

But what about those load balancers that don't have PowerShell cmdlets available? What then? Our hosting vendors have several creative solutions for these scenarios.

Example 1

The load balancer monitors a port on the server, and a site in IIS is set up with a binding to that port. When the load balancer notices that the site is unavailable, it drains all traffic to the server. When the site comes back up, the load balancer starts directing traffic to the server again.

In this scenario, the PowerShell scripts for removing the server from load and adding it back simply have to stop and start a site in IIS.
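
A sketch of what that could look like, assuming the WebAdministration module is available and that the site name is held in a project variable (the $SiteName variable here is hypothetical):

```powershell
Import-Module WebAdministration

# "Remove from load balancer": stop the IIS site the load balancer is probing.
# The load balancer sees the monitored port go down and drains traffic.
Stop-Website -Name $SiteName

# ... the deployment steps for this machine run here ...

# "Add back to load balancer": start the site again so the probe succeeds.
Start-Website -Name $SiteName
```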

Example 2

The load balancer monitors a file on the server. When the load balancer notices that the file has been removed, it drains all traffic to the server. When the file reappears, the load balancer starts directing traffic to the server again. In this scenario, the PowerShell scripts for removing the server from load and adding it back simply have to move a file to an alternative location and move it back to its original location afterwards.
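
This variant needs nothing beyond the file system. A sketch, where the probe file name and paths are entirely hypothetical:

```powershell
# The load balancer probes this file in the site root.
$probeFile = "C:\inetpub\wwwroot\healthcheck.txt"
$parking   = "C:\inetpub\healthcheck-parked.txt"

# "Remove from load balancer": hide the probe file so traffic is drained.
Move-Item $probeFile $parking -Force

# ... the deployment steps for this machine run here ...

# "Add back to load balancer": restore the probe file so traffic resumes.
Move-Item $parking $probeFile -Force
```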

So there are creative ways of interacting with the load balancer without actually scripting against it directly. The main challenge with the two examples above is knowing when the load balancer has finished draining traffic and the server has been removed from load, and knowing when the server is back in load. In these cases, I find that the hosting vendors have a lot more knowledge about how this can be done than I do. So my main advice is to talk to your hosting vendor and ask them for advice on how to make this possible.

In tomorrow's post it's time to wrap this blog series up with Lessons learned and useful resources

Monday, February 16, 2015

"From complete chaos to Octopus Deploy" Part 6: The hosting vendors

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

The hosting vendors

As I described in part 2 of this blog series, the first step in introducing Octopus Deploy to our organization was to set up several of our projects to be deployed automatically to our internal demo server. After seeing that the deployments worked as expected, we would start using Octopus Deploy towards our customer environments. Dealing with customer environments meant that we would have to start including the hosting vendors.

You might find it a bit risky to wait this long to involve the hosting vendors: what if they refused to let us use Octopus Deploy towards their environments? But I looked upon the challenge from a different perspective. By this time, we were actively using Octopus for about 5 of our projects (although only towards our internal demo server), meaning that approximately 8-10 of our consultants were using Octopus Deploy daily. There is strength in numbers, and I thought that 8-10 developers would have a greater chance of convincing the hosting vendors than me alone. Remember, we're dealing with over 30 hosting vendors, so the list is quite long.

We started out with the ones we knew would be easy, the ones who always say yes and do everything we ask them. Before long, our Octopus server was connected to several test and production environments from a couple of different hosting vendors. Hooray!

I created a document titled "Octopus from a hosting vendor's perspective" and asked the developers to distribute it to all the hosting vendors they were working with. All questions we received in return were included in the "Q&A" section of the document so others wouldn't need to ask the same questions. After reading this document, several more hosting vendors gave us their thumbs up.

But now we had to deal with the large hosting vendors, the ones that are usually quite strict. The "head of customer relation management" at Epinova scheduled meetings with the largest ones where we showed them Octopus Deploy and described to them in detail how we wanted to use Octopus towards the customer environments they were hosting. Their main doubts were usually the same ones: Security and SLAs.


There's nothing insecure about Octopus Deploy: all communication between the Octopus server and its tentacles is done over HTTPS, and we always restrict the port the tentacles listen on to our office IP address. We got all our security arguments confirmed when one of our customers, a leading company on web security in Norway, accepted our use of Octopus after analyzing the product themselves. Since then, we simply tell our hosting vendors that X approved it, and there's nothing more they can say.


When it comes to SLAs, the hosting vendors were afraid that automatic deployments with Octopus Deploy would lead to more downtime. We explained to them that although we were automating the deployment processes, we would not automatically introduce continuous delivery to all our projects. The number of deployments per customer would stay the same as before; the deployments would just be faster and less error prone. As a consequence, it would in fact lead to less downtime.

Some of the SLAs contain clauses stating that the hosting vendor should be notified when a deploy is done and that monitoring of the applications should be turned off beforehand. The hosting vendors have had difficulties enforcing these rules, as developers tend to forget them when they're in a rush. So the hosting vendors were quite excited when we told them that this, too, can be automated as part of the deployment process in Octopus Deploy.

At the time of writing, not a single hosting vendor has forbidden us from using Octopus Deploy. Only one has demanded we use polling tentacles; the rest have allowed listening tentacles. In my eyes, this is a great success and, to be honest, I had expected a lot more resistance than this. Stay tuned for tomorrow's post: Load balancing

Friday, February 13, 2015

"From complete chaos to Octopus Deploy" Part 5: The deployment process

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?
Part 1: The trigger of change 
Part 2: Where to begin? 
Part 3: The basic concepts of Octopus Deploy
Part 4: Required application changes

The deployment process

If you've been following this blog series from the beginning, you'll remember that my goal for introducing Octopus Deploy was to create a "standard" deployment process for all our projects. That process would look like this: 
As I've explained in earlier posts, the point of this blog series is not to show you step by step how to introduce the same process, but to help you understand which decisions you need to make if you want to automate your deployment routines and how you can get there.

So far I've explained how I mapped our customer projects to the basic features of Octopus Deploy, creating a simple structure that would not require too much change in the applications involved. I've shown you the application changes we did have to make, and in this blog post I'd like to discuss the deployment process itself. In other words, I want to take a closer look at the last step in the diagram above: "Deploy project". I won't explain the three steps prior to "Deploy project", as the Octopus documentation explains these excellently without my help.

Let's get to it then! What would happen in the "Deploy project" step? As all our customers have a web application, my first thought was that the "Deploy project" step would consist of the following:
1) Deploy the web application to the chosen environment, creating or updating a site in IIS at the same time
2) Warm up the web application by running a PowerShell script that makes a request to the web application and checks the HTTP status code returned
But then I started thinking about the customers that not only have a simple web application but also, for example, a Windows Service, and this step would have to be included:
3) Deploy and install windows service
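
Step 2 above, the warm-up, could be sketched like this. The URL would typically come from an Octopus variable; the one used here is hypothetical:

```powershell
# Warm up the web application and verify it responds with HTTP 200.
$url = "http://localhost/"

try {
    $response = Invoke-WebRequest -Uri $url -UseBasicParsing -TimeoutSec 120
} catch {
    Write-Error "Warm-up request to $url failed: $_"
    exit 1
}

if ($response.StatusCode -ne 200) {
    Write-Error "Warm-up returned HTTP $($response.StatusCode), expected 200"
    exit 1
}

Write-Host "Warm-up of $url succeeded"
```

Exiting with a non-zero code makes the deployment step fail, so a broken application never goes unnoticed.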
And what about load balanced scenarios? They would have to repeat these three steps for each machine in the environment. So already, it's clear that coming up with a default set of steps that would work for every single customer is impossible. And I doubt the developers would be very happy if they were forced to follow a set process for every single project when the possibilities in Octopus Deploy are so many! 
So I introduced a "best practice" instead: 
- Anything that is logical to automate, should be automated.
- All web applications that are deployed should be warmed up before the deployment can be considered successful.
Apart from this, the developers are free to decide what will happen in the "Deploy project" step. If you remember "Customer C" from "Part 3" of this blog series, they have an ASP.NET MVC Application and a WebForms application, running in two different environments where one is load balanced. Their deployment process now looks like this: 

Another customer might have a totally different deployment process, where the release notes are emailed to the customer in the first step or where the server monitoring is turned off before the application is deployed. 
The important lesson learned here is that you cannot force one process on 100 different customers. The customers will have preferences as to how the deployment process should look, and so will the hosting vendors and the developers. So customizing a deployment process for each customer, using a set of "best practices" as a basis, has turned out to be a great way to go in our case. Speaking of hosting vendors, we'll be talking more about them in tomorrow's post: The hosting vendors

Thursday, February 12, 2015

"From complete chaos to Octopus Deploy" Part 4: Required application changes

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?
Part 1: The trigger of change 
Part 2: Where to begin? 
Part 3: The basic concepts of Octopus Deploy
Part 4: Required application changes
Part 5: The deployment process
Part 6: The hosting vendors
Part 7: Load balancing
Part 8: Lessons learned and useful resources

Required application changes

While I tried to minimize the amount of change needed for our applications, there were some changes we could not avoid. I won't go into detail on all of them, but I would like to outline them briefly in case you come across similar needs.

Installing OctoPack

As described in the Octopus documentation, the OctoPack NuGet package builds Octopus Deploy-compatible NuGet packages from your projects. These packages are pushed to Octopus Deploy, and Octopus runs config transforms before deploying the package to the chosen environment. We added the OctoPack NuGet package to all our projects, in addition to a default .nuspec file to supply some basic information about our applications. Having installed the Octopus TeamCity plugin, the plugin handles the rest: running OctoPack to create the packages.
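
A minimal sketch of what such a default .nuspec file might look like; the id, version, authors and description here are all hypothetical placeholders:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>CustomerA.Web</id>
    <version>1.0.0</version>
    <authors>Epinova</authors>
    <description>ASP.NET MVC application for Customer A</description>
  </metadata>
</package>
```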

Config transformations

Prior to Octopus Deploy, we ran all our config transformations on build. This is a problem, as you will never have the exact same assemblies in any two environments. Imagine I deployed code to my Test environment; in order to deploy the same code to my Production environment, I would have to rebuild my code, as the config transformations were run on build. Rebuilding the code would generate a new set of assemblies which, in theory, would contain the same code as the assemblies deployed to Test, but they would not be identical, or the exact same files.
After introducing Octopus Deploy, the config transformations would no longer be run on build. Instead the NuGet generated by OctoPack would contain all the config transformation files, and we would configure Octopus to run the config transformations:
In order to change when the config transformations were run, we needed to make a couple of changes to our applications: 

- Disable transformations on build by setting TransformOnBuild to false in our .csproj files
- Set the Build Action of all config transform files to Build Action = Content so that they would be included in the NuGet package generated by OctoPack
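
These two changes can be sketched as a fragment of the .csproj file. The transform file names are hypothetical, and the TransformOnBuild property assumes your build targets honour it:

```xml
<PropertyGroup>
  <!-- Let Octopus Deploy run the transforms at deploy time instead of on build -->
  <TransformOnBuild>false</TransformOnBuild>
</PropertyGroup>
<ItemGroup>
  <!-- Include the transform files as Content so OctoPack packs them -->
  <Content Include="Web.Release.Test.config" />
  <Content Include="Web.Release.Prod.config" />
</ItemGroup>
```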

Deploy items

Our last challenge had to do with static files that should reside in the application root folder and that vary from environment to environment, but cannot be transformed. Some examples are robots.txt files (which you could transform if you converted them to XML, but we wanted to avoid that) and license.txt files.

Our solution was to add folders called DeployItems.&lt;EnvironmentName&gt; and add the static files that belong to the given environment to the matching folder:

We then added a script module in Octopus that all the projects would reference. This script module contained a simple deploy script, that would look for a DeployItems folder postfixed with the environment name, and if found it would copy all the files in the folder to the application root folder:

function Move-DeployItems() {
    Write-Host "Checking if there are any DeployItems to move..."

    if (Test-Path -Path .\Configs\DeployItems.$OctopusEnvironmentName\) {
        Write-Host "Moving deploy items from Configs\DeployItems.$OctopusEnvironmentName to root folder"
        Move-Item .\Configs\DeployItems.$OctopusEnvironmentName\* .\ -Force
    }

    if (Test-Path -Path .\Configs\DeployItems.$OctopusEnvironmentName.$OctopusMachineName\) {
        Write-Host "Moving deploy items from Configs\DeployItems.$OctopusEnvironmentName.$OctopusMachineName to root folder"
        Move-Item .\Configs\DeployItems.$OctopusEnvironmentName.$OctopusMachineName\* .\ -Force
    }

    Write-Host "Deleting deploy items folders"
    Get-ChildItem .\Configs\DeployItems.* -Recurse | ForEach-Object { Remove-Item $_.FullName -Force -Recurse }
    Remove-Item .\Configs\DeployItems.* -Force -Recurse
}

Notice that for load balanced environments, you could add a DeployItems.&lt;EnvironmentName&gt;.&lt;MachineName&gt; folder if the files differ per machine as well as per environment. To finish this off, the DeployItems folders are deleted after the files are copied.

With these minor changes: adding OctoPack, disabling config transforms on build, and adding DeployItems folders for copying static files, our applications were all Octopus Deploy-compatible. Stay tuned for tomorrow's post: The deployment process

Wednesday, February 11, 2015

"From complete chaos to Octopus Deploy" Part 3: The basic concepts of Octopus Deploy

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

The basic concepts of Octopus Deploy

The Octopus Deploy documentation is great when it comes to explaining every aspect of how to set up and use Octopus Deploy, so this should be your main reference when working with Octopus. I also found the Octopus 2.0 Training Videos to be very useful, they're short and precise, giving you exactly what you need.

But! There is one but: the examples in the documentation assume that a single company has all of its own projects set up in one Octopus installation. Which is great if that is how you use Octopus. In our case, however, being a consultancy with over 100 customers spread across approximately 30 different hosting vendors, we wanted all our customer projects set up in the same Octopus installation. This meant that we had to be absolutely sure that a consultant working with Customer A couldn't accidentally deploy Customer A's project to a machine that belonged to Customer B. As I briefly mentioned in the previous blog post, the setup would have to be foolproof. There is no room for error when dealing with automatic deployments.

If you recall the roadmap in my previous blog post, I selected three projects I would attempt to setup in Octopus Deploy. Let's take a closer look at the infrastructure of these projects:

Customer A:
Customer A had a single project, an ASP.NET MVC Application, running in three different environments:
- Our internal demo environment: Epinova.Demo
○ Machine: DemoServer
- The customer's external test environment: Release.Test
○ Machine: CustomerA_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerA_ProductionServer

Customer B:
Customer B had an ASP.NET MVC Application and a Windows Service, running in three different environments:
- Our internal demo environment: Epinova.Demo
○ Machine: DemoServer
- The customer's external test environment: Release.Test
○ Machine: CustomerB_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerB_ProductionServer

Customer C:
Customer C had an ASP.NET MVC Application and a WebForms application, running in two different environments where one is load balanced:
- The customer's external test environment: Release.Test
○ Machine: CustomerC_TestServer
- The customer's external production environment: Release.Prod
○ Machine: CustomerC_ProductionServer1
○ Machine: CustomerC_ProductionServer2

The first challenge was figuring out how the infrastructure of these three customers would map to the basic concepts of Octopus Deploy. Let's take a short look at the basic concepts:


The Octopus documentation states that "an environment is a group of machines that you will deploy to at the same time; common examples of environments are Test, Acceptance, Staging or Production."

Based on this statement, I would have to create one environment in Octopus per customer environment, resulting in a very long list of environments that would look like this for our three example customers:
- Epinova.Demo
- Release.Test Customer A
- Release.Test Customer B
- Release.Test Customer C
- Release.Prod Customer A
- Release.Prod Customer B
- Release.Prod Customer C

The issue with this (apart from the extremely high number of environments we'd have to maintain once our 100 customers were added) is that most of our projects contain configuration transformations that Octopus Deploy would handle. These files were usually named *.Epinova.Demo.config, *.Release.Test.config and *.Release.Prod.config, and the Octopus documentation states that config transformation files have to be named either *.Release.config or *.&lt;EnvironmentName&gt;.config. In other words, we would have to rename the transformation files for almost all our projects to fit the environment names.

I did not want to rename all our config transformations, and I certainly didn't want to maintain over 200 different environments. What I wanted was a short list of environments like this:
- Epinova.Demo
- Release.Test
- Release.Prod

This breaks the definition of an environment I quoted above, as you would never deploy to all the machines in the Release.Test environment or the Release.Prod environment at the same time; these machines belong to different customers. However, I found this setup to be the most logical, so I decided to try it out and not worry that the definition stated otherwise. So I set up three environments: Epinova.Demo, Release.Test and Release.Prod, and all the config transformation files could remain untouched as they already fit the *.&lt;EnvironmentName&gt;.config convention set by Octopus.


Machines are just that: machines running your applications. It doesn't matter whether it's an Azure VM or a physical server; they're all viewed as "machines". A machine belongs to an environment (or several environments, if necessary). For my three customers, the list of machines per environment would look like this:

Epinova.Demo machines:
- DemoServer

Release.Test machines:
- CustomerA_TestServer
- CustomerB_TestServer
- CustomerC_TestServer

Release.Prod machines:
- CustomerA_ProductionServer
- CustomerB_ProductionServer
- CustomerC_ProductionServer1
- CustomerC_ProductionServer2

Machine Roles

Looking at the list of machines per environment, we still have the issue of how to make sure that Customer A's ASP.NET MVC Application is not deployed to the machine CustomerB_TestServer. This is where "Machine Roles" are very useful!

When setting up a deployment step, you can assign the step to a certain role. This means that if you have two machines, one with the role db-server and the other with the role web-server, you can configure your web application to only be deployed to the machine with the role web-server (You don't want your web applications running on your database server, right?).

In the same way, we can use machine roles to separate customers from each other. For our customer machines listed above we could add the following roles:

Release.Test machines:
- CustomerA_TestServer (role: custA-web)
- CustomerB_TestServer (role: custB-web)
- CustomerC_TestServer (role: custC-web & custC-forms)

Release.Prod machines:
- CustomerA_ProductionServer (role: custA-web)
- CustomerB_ProductionServer (role: custB-web)
- CustomerC_ProductionServer1 (role: custC-web)
- CustomerC_ProductionServer2 (role: custC-web & custC-forms)

So when setting up a deployment step for Customer A, we would scope that step to the machine role custA-web. This way it would be virtually impossible to deploy the project of one customer to the machine of another.

Notice that the CustomerC_ProductionServer2 machine has two roles: custC-web and custC-forms. Remember, Customer C has both an ASP.NET MVC Application and a WebForms application and in this case we want the MVC application to be deployed to both production servers while the WebForms application is only deployed to the second production server. This can be done by using several roles, where custC-web is used for determining where the MVC application will be deployed and custC-forms is used for determining where the WebForms application will be deployed.

Now that we've figured out how the infrastructure of these three customers maps to the basic concepts of Octopus Deploy, we can start looking at the changes needed in our applications to make this work. Stay tuned for tomorrow's post: Required application changes!

Tuesday, February 10, 2015

"From complete chaos to Octopus Deploy" Part 2: Where to begin?

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?

Where to begin? 

I had my task at hand: Automate the deployment process for more than 100 different customers spread across approximately 30 different hosting vendors. But where to begin?

I started out by installing the Octopus Deploy server so that I had somewhere to click around and explore my possibilities while I read up on the basic concepts of Octopus Deploy. However, it didn't take me long to realize that before I dug too deep into the technicalities, I needed some sort of goal and a roadmap. I asked myself: "How do I want a standard deployment process to look after I've introduced Octopus Deploy to the organization?"

From this a goal was created:

As you can see from the diagram we were already using Git for source control and TeamCity as a build server and luckily for us, Octopus Deploy and TeamCity go very well together.

So this was my goal, but I still needed a way to get there. Needless to say, this goal would have to be further specified later on as "Deploy project" could mean anything from "Remove server from load balancer, deploy web application, deploy windows service, add server to load balancer, repeat for remaining servers" to "Automatically email release notes to customer, deploy web application, warm up web application". But at this point, specifying exactly what would happen in the "Deploy project" step was impossible.

Creating a roadmap

Next up, I began planning a roadmap to reach this goal I'd set for our "Standard deployment process".

My roadmap turned out something like this:
1. Find the most "average" customer project we had
2. Try to make the project "fit" the deployment process I had in mind, documenting every step on the way.
3. If successful, repeat step 1-2 for a more complex project. Do this for at least three different projects varying in complexity.

I decided I would focus on only one environment to begin with: an internal demo server we use for almost all our ongoing projects, called Epinova.Demo. If I reached my goal of setting up my "standard deployment process" for at least three different projects towards the Epinova.Demo environment, I would move on to the customers' own environments: Test, Staging, Acceptance and Production. However, I wanted to make sure that the "standard deployment process" was foolproof before moving on to external environments.

Finally, I had a plan and a goal! I ran this by a couple of my colleagues to see if they had any input as to which projects would be good for testing this out, and we managed to find some great ones. I also asked them if they could see any immediate flaws with my goal of what our "standard deployment process" would look like, and they all gave me the thumbs up. Thanks guys!

But enough of the planning and setting goals; what we all want is to get technical, isn't it? And that's exactly what we'll be doing! In tomorrow's post: The basic concepts of Octopus Deploy

Monday, February 9, 2015

"From complete chaos to Octopus Deploy" Part 1: The trigger of change

Introduction to this blog series

How do you automate the non-existing deployment routines of an organization with over 100 different customers, each having their own environments? How do you convince the leaders, developers and customers to give you the resources needed in order to automate everything?  Is it really possible to introduce a routine that works for everyone?
Part 1: The trigger of change
Part 2: Where to begin?
Part 3: The basic concepts of Octopus Deploy
Part 4: Required application changes
Part 5: The deployment process
Part 6: The hosting vendors
Part 7: Load balancing
Part 8: Lessons learned and useful resources

The trigger of change

At the end of 2013, I had been working on a project for some time when I realized I dreaded every new change and feature the customer requested, but I couldn't quite understand why. I usually love developing new functionality, so I found it quite strange that I now felt the urge to slow the customer down. Suddenly it hit me: it wasn't the new functionality I dreaded, it was the deployment of that functionality.

The project completely lacked deployment routines. Everything was done manually, and none of the developers felt they had full control of the responsibility of the applications, how the applications interacted with each other, and how the hosting vendor had set all this up to work as it should. Every deploy felt like walking through a minefield: the risk of blowing something up was huge, and there were regularly fires that needed to be put out because a deploy had gone wrong.

When dealing with manual deployments, you have no immediate knowledge of when the last deployment was done or who did it. It's time consuming and error prone, not to mention tedious. Developers don't want to spend their time copying files from one server to another, especially not when it comes to rollbacks.

I love the quote from Lao Tzu: "If you do not change direction, you may end up where you are heading." 

We were heading in the direction of flushing an otherwise great project down the drain because we weren't able to create routines for ourselves. I complained a lot about this to my fiancé, who is also a developer, and he gave me the same answer every day: "Why don't you just automate everything?" I shrugged the answer away every time. Then I realized that's what all the developers on the project were doing: we were all avoiding the issue instead of tackling it head on. So I decided it was time to take charge and change the direction we were going in.

The next time my fiancé said: "Why don't you just automate everything?" I answered: "But how? Where do I start?" He pointed me in the direction of a colleague of his who had introduced Octopus Deploy to one of his projects, and I started looking at this product they all were so excited about. It didn't take me long to see that Octopus Deploy could deliver exactly what we needed; it was almost too good to be true!

Convincing the customer

Now that we had a solution to our issue, we had to convince the customer that the cost of automating the deployment routines would in fact reduce their overall cost of development. I realized our chances of convincing the customer would be larger if the hosting vendor was supporting our case, so I went to the hosting vendor with my plan. They were very positive as they had the same feeling of walking through a minefield as we had, and together we set up a draft of how this could be done.

We decided it would be wise to automate the deployment routines one step at a time, starting with the most basic application in the test environment before we moved on to the more complex applications. After successfully automating the test environment, we would move on to the production environment, where we'd have to face the challenges of load balancers, SLAs and a more complex infrastructure in general. We were able to estimate the initial part of the job, automating the most basic application in the test environment, but beyond that we really couldn't give them any useful estimates, as none of us were in complete control of how everything was set up.

That's a bit ironic, isn't it? In order to get complete control, we needed the customer to accept a guesstimate as we didn't have enough control to estimate accurately. Luckily, the customer trusted us and knew we would never try to fool them in any way, so they accepted our guesstimate and a start date for the automation project was agreed upon. I was thrilled!

When everything fails

But not for long. Unfortunately, things don't always work out as planned. As the automation project was about to begin, we received orders that all development for this customer had to be paused indefinitely due to their financial situation. I was extremely disappointed; I'd been dreaming of Octopus Deploy for months and now I'd have to move on to some other project instead.

The same day, the CEO of the consultancy I work for (Epinova) asked to speak to me. I'd discussed my intent to automate the deployment routines with him earlier on and he knew how motivated I was for this task. His question baffled me: "Is there any way you could introduce Octopus Deploy to all our customers?"

Just to clarify what he was asking me: Epinova is a consultancy with (at the time) approximately 30 consultants. We had more than 100 different customers, and for each customer we were responsible for at least one development project. The projects were spread across approximately 30 different hosting vendors.

So what he was actually asking me was: "Are we able to create an automated deployment routine that will suit all our customers? Can you spare our developers time by making them use Octopus Deploy? And are you able to convince all the hosting vendors to play along?"

My answer? "Yes. Yes. Yes."

How would I do all that? I had no idea... But I knew I'd love the challenge! Stay tuned for tomorrow's post: Where to begin?