Wednesday, December 5, 2012

Epinova.CRMFramework 2.0 Beta version has been released

A couple of days ago, I was finally ready to release the Beta version of Epinova.CRMFramework 2.0 on CodePlex. Unfortunately, the release came with a couple of broken promises, but I believe my excuses are indisputable (at least for now).

When I released the Alpha version I wrote a blog post similar to this, listing the new requirements and changes made to the framework. The requirements have not changed for the Beta version, but a couple of my goals and plans for the framework have.

My initial plans were to replace the CrmQuery system with LINQ support in order to do strongly typed queries in a syntax well known to all developers. Unfortunately, this has not been done, and therefore I’ve had to reinstate CrmQuery. This means that the querying in v2.0 of Epinova.CRMFramework works exactly the same way as it did in version 1.0. If you are completely lost right now, take a look at the documentation.

The good news is that the CrmEntityController<T>.Find() method is back, and so is the CrmManyToManyRelationshipController<T, V>. This means that Epinova.CRMFramework 2.0 now includes all the features found in v1.0.

So why did I not rewrite the querying as I said I would? Short answer: lack of time. This framework is not something I get paid for; it’s a project I play around with in my spare time. As I have a full-time job, it’s only natural that I prioritize my employer and my customers over my toys. The good thing is, it’s free and open source, which means you’re free to add the LINQ support yourself if you want to!

Feel free to contact me if you have any comments or questions :)

Monday, October 15, 2012

My experiences from migrating EPiServer Relate+ 1 to EPiServer Relate+ 2

Recently I’ve been migrating an EPiServer Relate+ 1 site to EPiServer Relate+ 2, and I think it might be useful for others contemplating doing the same to get an overview of the troubles I had on the way.

First of all, some background information. The website I migrated was based on the Relate+ 1 template package, with some modifications and additional integrations. I decided to go through with the migration in the following steps:

1) Migrate both database and code in the development environment. I then did some brief testing to ensure that all data had been migrated.

2) Migrate the database in the test environment, and then deploy the code from step 1. I then let the customer test the site thoroughly while I did some bugfixing.

3) Migrate the database in the production environment, and then deploy the code from step 1.

Useful resources:

My main resource for migrating the site was the "Migrate EPiServer Relate+ to EPiServer Relate+ 2" article written by Klas Wikström. What I especially appreciated about his approach was that you end up with two databases, one for CMS content and one for Community content.

When it came to the code migrations needed to make the site run on Community 4, I was in for quite a surprise, as there were a lot of them! Some I could fix with a simple “Find and Replace”, but unfortunately many of them involved changes in method definitions, for example where parameters have switched places. Either way, the “EPiServerCommunity code migration” section of the “Migrating EPiServer Community 3.2 to 4.0 and EPiServer Mail 4.4 to 5.0” tech note will tell you everything you need.

Bumps in the road:

1) SQL Error when running the “migration_tool.sql” script:

The INSERT statement conflicted with the FOREIGN KEY constraint "FK_tblEPiServerCommunityClubMember_tblClub". The conflict occurred in database "dbFKWorldCommunity", table "dbo.tblEPiServerCommunityClub", column 'intID'.
The statement has been terminated.

I got about five of these errors the first time I ran the migration script. After a couple of rollbacks and some debugging, I figured I could change the failing statement from “SELECT intID FROM” to “SELECT TOP 1 intID FROM”. It turns out the script expects one record, but in my case several were found. So I rewrote the script a bit; if you want to take a look at it you can find it here (note: I cannot guarantee that this will work for you, but for me it did the trick).
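In other words, the shape of the fix looked something like this. This is only a sketch, not the actual migration script; the table name is taken from the error message above, and the real statements of course have WHERE clauses and touch other columns too:

```sql
-- Original: assumes exactly one matching row
SELECT intID FROM dbo.tblEPiServerCommunityClub

-- Adjusted: tolerates multiple matches by taking the first row
SELECT TOP 1 intID FROM dbo.tblEPiServerCommunityClub
```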

2) Error when installing a new Relate+ 2 site:

Error - Exception calling ‘WorkerProcessAccounts’ with ‘1’ argument(s): ‘Invalid application pool name’

I found the solution for this error on the forum (see Tommy’s solution).

Final words of advice:

1) Document every single step of the way! As I effectively did the migration three times (once in each environment), it was very useful to keep a journal of each step to make sure I didn’t forget a small detail.

2) Back up your database before you start a new step! There are quite a lot of things that can go wrong, and I had to roll back three or four times during the first migration. If I’d forgotten a database backup, I would have been in a lot of trouble.

3) Take your time! A migration is not something you do in a couple of hours. This migration included moving the site into a new environment, which took some extra time, but you should still set aside at least a day per migration (and that’s not including code migrations).

4) Breathe. The migration script can run for ages while you bite your nails wondering if you’re about to mess everything up. Go get a coffee; don’t sit there staring at the “Executing query” message waiting for it to explode. Most likely, everything will be fine.

Friday, August 3, 2012

How to crash the page structure in edit mode

A quite interesting bug was reported recently on one of the CMS 6 R2 websites I’ve been working on. When expanding a certain node in the page structure in edit mode, the following error message appeared:

System.UriFormatException: Invalid URI: A port was expected because of there is a colon (':') present but the port could not be parsed.

As annoying as that error message is, the most annoying part was that the message appeared in the frame where the page structure usually is.

This resulted in us being unable to navigate the page structure and therefore not being able to find the page responsible for the error message. I tried finding the page by clicking around in View mode, but that’s just a waste of time fumbling in the dark. I also tried adding the page ID directly to the URL, but the error message appeared on all pages in this part of the structure (and that’s a lot of pages) so it was impossible to figure out exactly which page was responsible.

This leaves debugging the database as the last option, hooray! From the error message and some poking around in ReSharper, I found that the LinkURL property of one of the pages was probably in the wrong format (containing a ‘:’, from the looks of the message). So I ran a database query to find all pages with a LinkURL containing a ‘:’:

SELECT * FROM [dbo].[tblWorkPage] WHERE [LinkURL] like '%:%'

This returned 200 results, too many for my taste, so I wanted to narrow it down. As I knew this problem only occurred in a certain part of the page structure, I wanted to find out how many pages existed in that part. By looking at the page IDs I could figure out the PagePath in the database and query all pages beginning with this PagePath:

SELECT * FROM [dbo].[tblPage] WHERE [PagePath] like '.1.91927.91932.43806.%'

This returned 188 results, still too many. But now I knew two things: all the pages in the part of the structure that was causing problems, and all the pages containing a ‘:’ in their LinkURL property. By joining these two queries, I could find all pages in the problematic part of the structure with a ‘:’ in their LinkURL:

SELECT W.pkID, W.fkPageID, W.Name, W.URLSegment, W.LinkURL, W.ExternalURL, P.PagePath FROM [dbo].[tblWorkPage] AS W INNER JOIN [dbo].[tblPage] AS P ON W.fkPageID = P.pkID WHERE P.PagePath like '.1.91927.91932.43806.%' AND W.LinkURL like '%:%'

Finally, this gave me 10 results, few enough to go through manually. Looking through the LinkURL values I found the problem: a simple typo in a shortcut URL.

I’m stunned that one simple typo like this can take down part of the site structure! And guess what, the only way of fixing it is directly in the database; how’s that for a Friday? First of all, I updated the LinkURL value in tblWorkPage:

UPDATE [dbo].[tblWorkPage] SET [LinkURL] = '' WHERE [LinkURL] like ''
GO

Then, as globalization was enabled on my website, I needed to do the same thing in tblPageLanguage (Thanks to Tore Gjerdrum for pointing this out for me):

UPDATE [dbo].[tblPageLanguage] SET [LinkURL] = '' WHERE [LinkURL] like ''
GO

Would you like to try this at home?
Note: This will crash a part of your website (or the whole thing if you’re really destructive and pick the start page in step #1). I take no responsibility for any of your actions or your angry boss/customer/colleagues!

Note 2: I’ve only tried this in CMS 6 R2

1) Choose a page you don’t like on your website
2) Set a shortcut on the page to for example:
3) Watch the structure break, and look forward to some SQL fun!


1) Åge Reinås just gave me a hot tip! In order to find the page responsible for the problems, you can check out the page structure under Access Rights in Admin mode. If you hover over the page names in the structure, the page responsible will show a faulty URL. For example: “”

2) Thanks to @stianvhagen for the hilarious meme he created after I published this post:

Friday, June 29, 2012

Microsoft Reportviewer 2010 javascript error

Lately I’ve been looking into Microsoft ReportViewer 2010 for displaying customer reports on an EPiServer site. It’s quite a straightforward task and should normally not cause any problems.

However, deploying the code to the test server did just the opposite: strange issues appeared. When trying to view a report I got a javascript error:

Message: Object required
Line: 2980
Char: 13
Code: 0
URI: http://mydomain/Reserved.ReportViewerWebControl.axd?OpType=Resource&Version=10.0.30319.1&Name=ViewerScript

Debugging this error I found that it was thrown from the SetSingleRegionVisibility function in ReportViewer’s built-in ViewerScript, where regionElement turned out to be null:

SetSingleRegionVisibility: function(regionElementId, makeVisible)
{
    var regionElement = $get(regionElementId);
    if (makeVisible)
        regionElement.style.display = "";
    else
        regionElement.style.display = "none";
}

Right, now what? I checked everything. I checked that the ReportViewer HTTP handler I had added to my web.config was correct and that all the correct assemblies had been included. I whined about it on Twitter and to a colleague, but surprisingly, whining did not fix the error.

I’d taken a look in the GAC several times to check that the assemblies were there as well, but then it suddenly hit me. On the test server, I had installed the Microsoft Reportviewer 2010 redistributables, while on my local machine I found the Microsoft Reportviewer 2010 SP1 redistributables. I updated the assemblies on the test server to SP1 and the problems were solved, hooray!

So much annoyance for so little!

Wednesday, April 18, 2012

If everyone decides to throw their morals out of the window, won’t we all be certified idiots?

The first year I worked as a developer, I passed my first two Microsoft certifications. I studied for months, reading the Self-Paced Training Kit books from cover to cover and doing the exercises over and over again. I spent a lot of time studying, and I was nervous as hell. I was fresh out of university and I wanted to show my employer that I could get a great score.

Did I get a great score? Yes, on one of them, but the other one I barely passed. I was proud of myself, damn proud! That was until I got the question: “Which brain-dump did you use?”. Say what? I didn’t know what a brain-dump was, but soon enough the person asking explained it to me. I was shocked; he had assumed I cheated on my certifications! A while later I realized that he hadn’t made this assumption on the basis of “the unlikely event” of me passing the certifications, he made it because “everyone uses brain-dumps”. That shocked me even more. If everyone decides to throw their morals out of the window, won’t we all be certified idiots? If we all could call ourselves “Queen of England”, wouldn’t the title lose its value?

Yes, it would. And this is exactly what has happened with the Microsoft certifications. NetworkWorld did an analysis in 2008 where they found that Microsoft was the Most Braindumped Certification Vendor. And all this despite Microsoft’s strict exam policy, which states that a candidate may be banned for “Using unauthorized material in attempting to satisfy certification requirements (this includes using "brain-dump" material and/or unauthorized publication of exam questions with or without answers)”.

But whose responsibility is it to make sure candidates are not using brain-dumps?

It’s my responsibility
I will never throw my morals out of the window and become a certified idiot. I respect myself and my job too much to resort to brain-dumps. I expect my colleagues and fellow developers to do the same.

It’s my employer’s responsibility
My employer should be critical of the scores I present them with. If I am able to complete four certifications in a month, all with a 100% score, I expect to be questioned.

It’s the customer’s responsibility
The customer should be critical of the number of certified developers their supplier presents to them. Unfortunately, some companies encourage their employees to use brain-dumps; it apparently saves them time and money.

It’s Microsoft’s responsibility
Microsoft should continue to sue the brain-dump vendors, but that will probably not be sufficient. They should also continue to do statistical analysis of the certifications so that cheaters can be identified and banned. Last, but not least: they need to keep their certifications up to speed, constantly improving them and making them harder to spread.

If you’re interested in some more blog posts about this topic, I’d recommend having a look at these:

Tuesday, April 17, 2012

Trying out the SocialCast REST API

Lately, I’ve been integrating an EPiServer site with SocialCast, a social network for businesses. I thought I’d share some code on how I chose to do this.

SocialCast has quite an extensive REST API which allows the developer to retrieve pretty much all the information needed. In this blog post I’ll show you how to retrieve the most recent status updates and how to post a new message to SocialCast.

1) The data from SocialCast can be retrieved as either JSON or XML; I’ve chosen JSON for these examples.
2) I’ve used the SocialCast demo site in these examples. In order to click on the links I’ve supplied further down, you need to be logged into the demo site (the username and password are supplied next to the login box).

First, let’s create a class called WebRequester, which I’ve made internal as I only want it to be accessible from within the same assembly. This class contains two methods: the first creates a WebRequest towards SocialCast and returns the response; the second is a generic method that uses a JavaScriptSerializer to deserialize the HttpWebResponse to an object.

using System;
using System.IO;
using System.Net;
using System.Web.Script.Serialization;

internal static class WebRequester
{
   public static HttpWebResponse GetRequestinJson(string url, string username, string password, string method)
   {
      try
      {
         var webRequest = WebRequest.Create(url) as HttpWebRequest;
         if (webRequest != null)
         {
            webRequest.Credentials = new NetworkCredential(username, password);
            webRequest.Method = method;
            webRequest.ServicePoint.Expect100Continue = false;
            webRequest.Timeout = 20000;
            webRequest.ContentType = "application/json";
            return (HttpWebResponse)webRequest.GetResponse();
         }
      }
      catch (Exception)
      {
         // Implement your own error handling (logging etc.)
      }
      return null;
   }

   public static T DeserealizeResponse<T>(HttpWebResponse httpWebResponse) where T : new()
   {
      try
      {
         Stream responseStream = httpWebResponse.GetResponseStream();
         if (responseStream != null)
         {
            string responseBody;
            using (StreamReader sr = new StreamReader(responseStream))
            {
               responseBody = sr.ReadToEnd();
            }
            JavaScriptSerializer jsSerializer = new JavaScriptSerializer();
            return jsSerializer.Deserialize<T>(responseBody);
         }
      }
      catch (Exception)
      {
         // Implement your own error handling (logging etc.)
      }
      return new T();
   }
}

Creating the SocialCast class

Next we’ll create a class called SocialCast which will contain one method called GetMostRecentStatusUpdates and another one called PostMessage. Both these methods use the WebRequester class we created above in order to execute the web requests and deserialize the web response.

public static class SocialCast
{
   public static MessageCollection GetMostRecentStatusUpdates()
   {
      HttpWebResponse httpWebResponse = WebRequester.GetRequestinJson("", "", "demo", "GET");
      if (httpWebResponse != null)
         return WebRequester.DeserealizeResponse<MessageCollection>(httpWebResponse);
      return new MessageCollection();
   }

   public static bool PostMessage(string message)
   {
      string postUrl = string.Format("[body]={0}", message);
      HttpWebResponse httpWebResponse = WebRequester.GetRequestinJson(postUrl, "", "demo", "POST");
      if (httpWebResponse != null && httpWebResponse.StatusCode == HttpStatusCode.Created)
         return true;
      return false;
   }
}

Yes, I am aware that the code above contains the username and password for the demo site, but I don’t consider that a problem, as SocialCast themselves have made them public on the demo site.

So let’s go through the GetMostRecentStatusUpdates method. The URL and the querystring parameters supplied to the WebRequester are explained in the SocialCast API documentation. After the web request is executed and the response is returned, the response is deserialized to a MessageCollection object. The MessageCollection class contains the information we need from SocialCast, and it has to match the structure of the JSON response. The MessageCollection class could for example look like this:

public class MessageCollection
{
   public Message[] messages { get; set; }
}

public class Message
{
   public User user { get; set; }
   public string body { get; set; }
   public string created_at { get; set; }
   public string likes_count { get; set; }
   public string url { get; set; }
}

public class User
{
   public string name { get; set; }
   public string url { get; set; }
}

As you can see, the MessageCollection simply contains an array of Message objects, and this structure matches the structure of the JSON response returned by the API.
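To make the mapping concrete, here’s a small stand-alone console sketch showing how JavaScriptSerializer deserializes a JSON string shaped like the messages response. The JSON content and the trimmed-down copies of the classes are made up for illustration; they are not actual SocialCast data:

```csharp
using System;
using System.Web.Script.Serialization;

public class MessageCollection
{
    public Message[] messages { get; set; }
}

public class Message
{
    public User user { get; set; }
    public string body { get; set; }
}

public class User
{
    public string name { get; set; }
}

class Program
{
    static void Main()
    {
        // Made-up JSON in the same shape as the messages response
        string json = @"{""messages"":[{""user"":{""name"":""Demo User""},""body"":""Hello from SocialCast""}]}";

        var serializer = new JavaScriptSerializer();
        MessageCollection collection = serializer.Deserialize<MessageCollection>(json);

        Console.WriteLine(collection.messages[0].user.name); // Demo User
        Console.WriteLine(collection.messages[0].body);      // Hello from SocialCast
    }
}
```

Any JSON properties that don’t exist on the target class are simply ignored by the serializer, which is why the trimmed classes still deserialize a fuller response.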

Using the SocialCast class

MessageCollection messagesCollection = SocialCast.GetMostRecentStatusUpdates();

bool success = SocialCast.PostMessage("This is a test message which I'm posting to SocialCast");

That’s it :)

Wednesday, March 21, 2012

Playing around with the EPiServer.Util.SimpleEncryption class

The other day I was playing around with Reflector and I ran into the EPiServer.Util.SimpleEncryption class, which I hadn’t really noticed before. The class includes the usual self-explanatory cryptography methods like Encrypt and Decrypt, but it also contains some not so self-explanatory methods like ClearText and EncryptedText.
So I had to play around with this a little bit, hoping I’d understand why EPiServer has included this class. I browsed for usages of the class and the only ones I could find were in the ExceptionManager and in the EPiServer.Cmo.Cms assembly. I found one comment from 2005 stating that EPiServer.Util originally wasn’t planned to be made public. If this is the case, it might explain why there’s so little information about it.

The SimpleEncryption constructor
The SimpleEncryption constructor takes one parameter, an initializer used for generating the cryptography key:

SimpleEncryption simpleEncryption = new SimpleEncryption("myInitializer");

Encrypting text
There are two encryption methods you can use, Encrypt or EncryptedText. They will return the same encrypted string, but the string returned from EncryptedText will be prefixed with ENCRYPTED:

string original = "Testing encryption with SimpleEncryption";

string encrypt = simpleEncryption.Encrypt("myKey", original);
// encrypt == AvVayN0k1jSXjUVHzRmtq9rl9yCtmNLq+sBvz53vr0A6CIbzMaASE2LZ1LHR7hPT

string encryptedText = simpleEncryption.EncryptedText("myKey", original);
// encryptedText == ENCRYPTED:AvVayN0k1jSXjUVHzRmtq9rl9yCtmNLq+sBvz53vr0A6CIbzMaASE2LZ1LHR7hPT
It took me a while to understand why the EncryptedText method is included at all; in the beginning I found it a bit pointless.

Checking if a string is encrypted
Let’s assume you have a piece of text, and you don’t know whether this text is encrypted or not. If you’ve made a habit of using the EncryptedText method, you can use the IsEncrypted method to check if the text is encrypted:

bool isEncrypted = simpleEncryption.IsEncrypted("AvVayN0k1jSXjUVHzRmtq9rl9yCtmNLq+sBvz53vr0A6CIbzMaASE2LZ1LHR7hPT");
// isEncrypted == false

bool isEncryptedText = simpleEncryption.IsEncrypted("ENCRYPTED:AvVayN0k1jSXjUVHzRmtq9rl9yCtmNLq+sBvz53vr0A6CIbzMaASE2LZ1LHR7hPT");
// isEncryptedText == true

Here I’m calling the IsEncrypted method with the two encrypted strings from the previous example. Both these texts are encrypted, so you’d assume that both isEncrypted and isEncryptedText would be true. That’s not the case: the IsEncrypted method only checks whether the specified value is prefixed with “ENCRYPTED:”. This means that the following would return true even though the text is not encrypted:

bool isEncrypted = simpleEncryption.IsEncrypted("ENCRYPTED:This text is not encrypted");
// isEncrypted == true

Decrypting text
If you encrypted the text using the Encrypt method, you can decrypt the text by using the Decrypt method. If you encrypted the text using the EncryptedText method, you need to use the ClearText method in order to decrypt it.
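To tie the method pairs together, a round trip could look like the sketch below. This is untested illustration code based on the methods described above; in particular, I’m assuming ClearText takes the same (key, text) parameters as EncryptedText:

```
SimpleEncryption simpleEncryption = new SimpleEncryption("myInitializer");

// Encrypt/Decrypt work as a pair
string encrypted = simpleEncryption.Encrypt("myKey", "My secret");
string decrypted = simpleEncryption.Decrypt("myKey", encrypted);

// EncryptedText/ClearText work as a pair, handling the ENCRYPTED: prefix
// (assumption: ClearText mirrors EncryptedText's parameters)
string encryptedText = simpleEncryption.EncryptedText("myKey", "My secret");
string clearText = simpleEncryption.ClearText("myKey", encryptedText);
```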

Update: This class is old, outdated and not safe. It will be phased out, so don’t use it. See the comments for more information. Conclusion: you’ve just wasted your time reading this blog post! Sorry...

Tuesday, March 13, 2012

Epinova.CRMFramework 2.0 Alpha version has been released

Finally, Epinova.CRMFramework now supports Microsoft Dynamics CRM 2011! The alpha version has just been released on CodePlex, check it out!

Due to a lot of changes in Microsoft Dynamics CRM 2011, the framework has been completely rewritten with a few consequences:

New requirements:
.NET Framework 4
Microsoft Dynamics CRM 2011

CrmControllerFactory.Instance returns an interface
In previous versions CrmControllerFactory.Instance returned an object of type CrmControllerFactory. This now returns an interface: ICrmControllerFactory.

Permanent removal of CrmQuery class
The CrmQuery system, while functioning in the previous version, was not well enough equipped for complex queries. I’ve removed the CrmQuery class with plans of replacing it with LINQ support in the Beta version.

Temporary removal of CrmEntityController<T>.Find()
As the CrmEntityController<T>.Find() method requires the CrmQuery class, this method has been removed. It will be reinserted in the Beta version of the framework.

Temporary removal of CrmManyToManyRelationshipController<T, V>
As the CrmManyToManyRelationshipController<T, V> class requires the CrmQuery class it has been removed. It will be reinserted in the Beta version of the framework.


Can I upgrade from v1.0 to v2.0-alpha?
If you are integrating with Microsoft Dynamics CRM 2011 and you are not affected by the removals listed above, then yes. I would recommend testing well though, as I cannot guarantee a bug-free alpha version (surprise, surprise). Replace the dlls from v1.0 with the dlls from v2.0-alpha and you’re good to go.

When will the beta version be released?
I cannot give you an accurate answer to this question. The more feedback I get on the alpha version, the faster the beta version will be released!

Do you have any other questions? Feel free to contact me or post a comment :)

Tuesday, February 28, 2012

How to avoid removal of empty attributes for HTML elements in TinyMCE

TinyMCE has a habit of removing empty attributes for many HTML elements, and for various reasons this might not always be what you want.

For instance, today a customer ran into an issue where the value attribute of an <option> element was removed if it was empty. The empty value attribute was needed for custom validation purposes, so this caused quite a headache for them.

Another example which is more relevant for other projects, is the empty alt attribute for the <img> element, which is needed for the HTML to validate.

So how can you force TinyMCE to leave these empty attributes be instead of removing them? You can add your rule to extended_valid_elements, which will then be added to the default TinyMCE rule set.

Here’s the solution for allowing an empty value attribute for the <option> element:

extended_valid_elements: 'option[value=]'

And here’s the solution for allowing an empty alt attribute for the <img> element:

extended_valid_elements: 'img[alt=]'

Notice the equals (=) sign after the attribute name; this is what tells TinyMCE that an empty value is allowed. If you remove the equals sign, for example option[value], TinyMCE would interpret the value attribute as being allowed for the option element, but would still remove it if its value was empty.
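If you need several of these rules at once, extended_valid_elements takes a comma-separated list, so the two examples above can be combined like this (a config fragment, not a complete TinyMCE configuration):

```
extended_valid_elements: 'option[value=],img[alt=]'
```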

I recommend having a look at the default rule set for TinyMCE if you’re not sure how TinyMCE treats the different elements and attributes!

Tuesday, February 21, 2012

Webparts in EPiServer: Step by step

At Epinova we’re huge fans of using web parts in EPiServer as this enables the editor to choose what type of content should be displayed on certain pages while at the same time ensuring the editor doesn’t try to break the design of the website.

This is a step by step guide to getting started with web parts in EPiServer using the AlloyTech demo site as an example.

Our goal is to transform the bottom region of the AlloyTech startpage (the pink rectangle) into a web part zone where the editor can choose between several web parts.


1) Install EPiCode.WebParts.Core and Epinova.WebParts.Providers from the EPiServer Nuget feed.


2) Make sure references to the assemblies installed by the packages in step 1 have been added to your project.


3) In MasterPage.Master, add a ScriptManagerLowUiImpact control and a EPiWebPartManager control after the opening <form> tag:

<form runat="server">
   <epicode:ScriptManagerLowUiImpact runat="server" EnablePartialRendering="false" />
   <epicode:EpiWebPartManager runat="server" Personalization-InitialScope="Shared" Personalization-Enabled="true" />
   <!-- Your markup here -->
</form>

4) Create a ManagementConsole.ascx:

Note that the ManagementConsole.ascx file should NOT have a code-behind or designer file; instead it should inherit directly from System.Web.UI.UserControl.

<%@ Control Language="C#" AutoEventWireup="true" Inherits="System.Web.UI.UserControl" %>

<epicode:WebPartManagementConsole ID="WebPartManagementConsole1" runat="server" />
<asp:CatalogZone runat="server" ID="ThemeCatalogZone">
   <ZoneTemplate>
      <asp:DeclarativeCatalogPart ID="DeclarativeCatalogPart1" runat="server">
         <WebPartsTemplate>
         </WebPartsTemplate>
      </asp:DeclarativeCatalogPart>
   </ZoneTemplate>
</asp:CatalogZone>

What you’ve done here is declare a CatalogZone called ThemeCatalogZone; later you will declare the different types of web parts available in this zone by adding them to the WebPartsTemplate.

5) Add the following to the <pages><controls> section of your web.config file:

<add tagPrefix="uc" tagName="ManagementConsole" src="~/Templates/AlloyTech/WebParts/Util/ManagementConsole.ascx" />

6) Add the ManagementConsole usercontrol to your MasterPage.Master (directly after the EPiWebPartManager control):

<form runat="server">
   <epicode:ScriptManagerLowUiImpact runat="server" EnablePartialRendering="false" />
   <epicode:EpiWebPartManager runat="server" Personalization-InitialScope="Shared" Personalization-Enabled="true" />
   <uc:ManagementConsole runat="server" />
   <!-- Your markup here -->
</form>

7) Create a banner webpart:

A banner web part will consist of an image, an alternative text and a URL for the image. Create a usercontrol called BannerPart, and add the following code to BannerPart.ascx.cs:

public partial class BannerPart : UserControlWebPartBase
{
   [Personalizable, IsRequired]
   public PropertyImageUrl Image { get; set; }

   [Personalizable]
   public PropertyString ImageAltText { get; set; }

   [Personalizable, IsRequired]
   public PropertyUrl ImageLink { get; set; }

   public BannerPart()
   {
      Image = new PropertyImageUrl();
      ImageAltText = new PropertyString();
      ImageLink = new PropertyUrl();
   }

   protected override void OnPreRender(EventArgs e)
   {
      uxImage.ImageUrl = Image.Value != null ? (string)Image.Value : string.Empty;
      uxImage.AlternateText = ImageAltText.Value != null ? (string)ImageAltText.Value : string.Empty;
      uxLink.NavigateUrl = ImageLink.Value != null ? (string)ImageLink.Value : string.Empty;
   }
}

Here we have declared the three properties mentioned above: Image, ImageAltText and ImageLink. All the properties are Personalizable, but only Image and ImageLink are required.

Add the following markup to BannerPart.ascx:

<%@ Control Language="C#" AutoEventWireup="true" CodeBehind="BannerPart.ascx.cs"
   Inherits="EPiServer.Templates.AlloyTech.WebParts.BannerPart" %>

<asp:HyperLink ID="uxLink" runat="server">
   <asp:Image ID="uxImage" runat="server" /></asp:HyperLink>

You can add translations to your Banner web part by creating a lang file called WebParts.xml:

<?xml version="1.0" encoding="utf-8" ?>
<languages>
   <language name="English" id="en">
      <webparts>
         <webpart name="BannerPart">
            <caption>Banner</caption>
            <description>Image banner with link and alternative text</description>
         </webpart>
         <common>
            <property name="Image">
               <caption>Image</caption>
               <help></help>
            </property>
            <property name="ImageAltText">
               <caption>Alternative Text</caption>
               <help></help>
            </property>
            <property name="ImageLink">
               <caption>Image Link</caption>
               <help></help>
            </property>
         </common>
      </webparts>
   </language>
</languages>

7) Add BannerPart to the <WebPartsTemplate> in ManagementConsole.ascx:

<%@ Control Language="C#" AutoEventWireup="true" Inherits="System.Web.UI.UserControl" %>
<%@ Register TagPrefix="uc" tagName="BannerPart" src="../BannerPart.ascx" %>

<epicode:WebPartManagementConsole ID="WebPartManagementConsole1" runat="server" />
<asp:CatalogZone runat="server" ID="ThemeCatalogZone">
   <ZoneTemplate>
      <asp:DeclarativeCatalogPart ID="DeclarativeCatalogPart1" runat="server">
         <WebPartsTemplate>
            <uc:BannerPart ID="wpBanner" runat="server" />
         </WebPartsTemplate>
      </asp:DeclarativeCatalogPart>
   </ZoneTemplate>
</asp:CatalogZone>

8) Add a webpart zone in your Default.aspx file:

<epicode:ZoneLowUiImpact runat="server" ID="MainContentAreaZone" catalogzoneid="ThemeCatalogZone" layoutorientation="Horizontal" />

Note that the CatalogZoneId is equal to the ID of the CatalogZone in step 7, which means that the editor will be able to create a Banner webpart on the startpage.

9) Create a banner webpart on the startpage of your website:

In order to do this you need to be logged in as an administrator. If you right-click your startpage in view mode, you will see a new option in the context menu called “Edit Web Parts”. Clicking this option will show you an overview of the web part zones on the page:


You will also see a dropdown list containing all the available web parts for this web part zone, in this case only Banner. Click “Add” and you will be able to create a Banner web part:


Fill in all the properties and click OK to get a preview of the web part on the page. If you want to publish your new web part, right-click the page and choose “Save and publish”.

10) Create your own web parts:

If you want to create more web parts, you need to create a usercontrol for your web part that inherits from UserControlWebPartBase, as shown for BannerPart above. You will also need to add it to the <WebPartsTemplate> of your ManagementConsole.ascx file (step 7) in order for the administrator to access it in the web part zone.
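As a minimal sketch, a new web part could follow the same pattern as BannerPart. Note that the class name, property, and control ID below (QuotePart, QuoteText, uxQuote) are hypothetical examples, the namespace of UserControlWebPartBase is assumed, and I use a plain string property for simplicity where BannerPart uses EPiServer property types with a .Value:

```csharp
// Hypothetical example of a custom web part code-behind.
// QuotePart/QuoteText/uxQuote are made-up names; the namespace of
// UserControlWebPartBase is an assumption based on the epicode tag prefix.
using System;
using System.Web.UI.WebControls.WebParts;

namespace EPiServer.Templates.AlloyTech.WebParts
{
    public partial class QuotePart : UserControlWebPartBase
    {
        // Personalizable so the editor can set a value per web part instance,
        // WebBrowsable so it shows up in the editor UI
        [Personalizable, WebBrowsable]
        public string QuoteText { get; set; }

        protected void Page_Load(object sender, EventArgs e)
        {
            // uxQuote would be an asp:Literal (or similar) in QuotePart.ascx
            uxQuote.Text = QuoteText ?? string.Empty;
        }
    }
}
```

Remember to register the new usercontrol in ManagementConsole.ascx and add translations for it in WebParts.xml, just like for BannerPart.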

Saturday, February 11, 2012

Multitasking Saturday with a technical review and Epinova.CRMFramework


What you see here (apart from the game and my dog) is my new project. I’ve been asked to do a technical review of an upcoming book. I didn’t hesitate for a second when I got the offer as I’m extremely curious to see how the process of publishing a technical book works.

For the next couple of months, I’ll receive a new chapter and a due date weekly. What I have to do is read the chapter and comment on things like: Is the structure of the chapter logical? Are the code examples good enough? Is something too detailed or missing? Will the reader be able to reach the objectives of the chapter after reading it and doing the exercises? Quite a challenge, but so far I love it!

Needless to say, I wasn’t able to concentrate much on the technical review at the end of today’s game. But now it’s time to get back to work and finish the review before I move on to finishing off the upgrade of my CRM Framework.

Friday, February 10, 2012

Files zipped on Mac gave System.UnauthorizedAccessException after transferring to Windows server

I ran into a quite frustrating problem during a deployment of an EPiServer site yesterday. All files in the VPP folder gave me an HTTP 500 error:

“System.UnauthorizedAccessException: Access to the path ‘…’ is denied.”

We’ve all seen this error message before, and the first thing to do is check the access rights on the VPP folders on disk. The access rights were correct, so I went on to check the VPP and IIS configuration, but found nothing wrong.

After double-checking the configuration, asking everyone available for possible solutions, eating all the cheese in my fridge, and checking the configuration yet again, I realized I was clueless. Totally clueless, with Google laughing at me and a customer becoming more and more anxious. At last, however, I found the problem in the most random manner possible.

This is what we did earlier that night:

1) VPP folders were zipped on a Mac
2) The zipped file was transferred to a Windows 2008 server
3) The files were unzipped using the built-in “Extract all” function in Windows.

From what I’ve been able to find out, these steps are what caused the problems. The files zipped on a Mac and extracted using the built-in “Extract all” function in Windows were automatically encrypted!

This does not seem to be an issue if you use WinRAR or 7-Zip, but I haven’t been able to find a more detailed description of why. The closest I’ve come to finding some more information is from this forum thread.

How to decrypt the files:

1) Right-click the VPP folder and choose “Properties”
2) In the General tab, click “Advanced”
3) Uncheck the “Encrypt contents to secure data” checkbox
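If there are many files, you can also clear the EFS encryption programmatically instead of clicking through the Properties dialog. A minimal sketch using .NET’s built-in EFS support (Windows/NTFS only; the folder path is just an example, and the account running it must be the one the files were encrypted for):

```csharp
// Sketch: recursively decrypt EFS-encrypted files under a VPP folder.
// File.Decrypt and FileAttributes.Encrypted only work on Windows with NTFS/EFS.
// The path is an example placeholder.
using System.IO;

class DecryptVpp
{
    static void Main()
    {
        string vppRoot = @"C:\EPiServer\VPP";
        foreach (string file in Directory.GetFiles(vppRoot, "*", SearchOption.AllDirectories))
        {
            // Only touch files that actually have the "Encrypt contents" flag set
            if ((File.GetAttributes(file) & FileAttributes.Encrypted) != 0)
            {
                File.Decrypt(file);
            }
        }
    }
}
```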

So it all ended well, but I’d still like to know exactly why Windows encrypted the files during the extraction. If you have the answer, please leave a comment so this can stop annoying me :)

Last, but not least: Thanks to everyone who tried to help!

Tuesday, January 3, 2012

“Let more people experience that boobs and coding ability are not mutually exclusive” – My Response to “The Token Female”

Sitting down with my morning coffee, I just came across a retweet by one of my followers on Twitter. The tweet referred to “The token female” by Christin Gorman. 

Being a female developer I never cease to be amazed by all the female developer vs. male developer discussions out there, but so far I have been able to stay on the sidelines. This blog post, however, triggered something in me (I think it was the sentence “Let more people experience that boobs and coding ability are not mutually exclusive”) and I immediately felt the need to write a response.

I do not agree with all of Christin’s views, particularly her view of placing computer science students in classes based on skills. My main inspiration in life has always been talented peers, and had I not been allowed into the same class as my university friends who had programmed since their early school days, I’m afraid I would have lost a lot of motivation.

While not agreeing with her view on this, I do see her point. Like her, I had never programmed before I started university, and of course the guys in my class never asked me a question they needed the answer to. Why should they? They had 3-5 years more experience than me, and all I did was ask “stupid” questions. My belief is that there are no stupid questions; the only stupid thing you can do is not to ask. So after a year of “stupid” questions I slowly saw things changing. My male friends suddenly started asking me questions, began to discuss the assignments with me, asking me for help. Their patience and helpfulness had helped me reach their level. BUT, if I had been put in a class containing students with the same (lack of) skills as me, I suspect it would have taken 3-5 years for me to reach this level instead of 1-2 years.

Christin Gorman describes a couple of scenarios in which she has been directly doubted as a female developer from a technical perspective, but I have been lucky enough not to experience this (so far). I know it will happen at some point, and it will probably happen more than once, but I am prepared for that. I have, on the other hand, received quite a lot of inappropriate comments from men I have met in various work-related situations. These comments have ranged from “The only reason you got this job is because you’re female” to comments bordering on sexual harassment. As these comments have all come from men who have not actually seen my work, I have been able to shrug them off without much thought. If I have thought anything at all, it’s been “Just wait, I’ll show you how good I can be”.

I will always continue to ask “stupid” questions, and my fellow male developers who know me well also know that the reason I do this is so I can become better. Some of them might even stop to think that having to answer all these questions will actually make them better at the same time. But the one thing I’m most grateful for above all is that my colleagues at Epinova have never doubted my skills as a female developer. They are living proof that male developers know “boobs and coding ability are not mutually exclusive”.

Most female developers I know have had similar experiences to the ones described by Christin Gorman or myself. While these experiences can be aggravating or hurtful, it has to be said: There is no better feeling than proving you can kick ass AND be a female developer at the same time!

So thank you to Christin Gorman for a great blog post. Now, male or female, let us kick some ass!