Better looking URLs in ASP.NET MVC 3

If you are like me and think the action/controller naming conventions in ASP.NET MVC 3 are not great (capitalized URLs are evil), here’s a small trick to make them look nicer.

Let’s say we have a controller whose name consists of more than one word, e.g. FooBarController, and it has an action called “FooBaz”. Normally ASP.NET MVC will map the URL as "~/FooBar/FooBaz". I don’t like this naming convention very much and would prefer "~/foo-bar/foo-baz". So how do we achieve this?

It seems that the normal routing options do not support this kind of behavior. You could add a custom route for every controller with a long name and use the [ActionName] attribute on actions, but I wanted to see if a more uniform solution was possible.

The first approach I investigated was writing my own IRouteHandler, which allows you to change the default routing as you see fit. This requires you to write your own IHttpHandler, the main request processing pipeline in ASP.NET MVC 3. Looking at the source of System.Web.Mvc.MvcHandler, this seemed doable by copy-pasting and changing the way it resolves the controller name. It felt wrong, however: duplicating the whole request processing pipeline for a cosmetic task seemed like overkill.

There is a much easier way to achieve this: controller factory injection. As you might know, ASP.NET MVC 3 lets you override how it instantiates controllers, so you can create your own DelimiterControllerFactory implementation.

public class DelimiterControllerFactory : DefaultControllerFactory
{
    public override IController CreateController(RequestContext requestContext,
        string controllerName)
    {
        requestContext.RouteData.Values["action"] =
            requestContext.RouteData.Values["action"].ToString().Replace("-", "");
        return base.CreateController(requestContext, controllerName.Replace("-", ""));
    }
}

You can then plug this controller factory in during Application_Start() in Global.asax.

protected void Application_Start()
{
    ControllerBuilder.Current.SetControllerFactory(new DelimiterControllerFactory());
}

Now when you go to /foo-bar/foo-baz, it will resolve to FooBarController.FooBaz(). Keep in mind that you need to rename your view directories to foo-bar instead of FooBar. It might be possible to keep the old behavior by overriding the View() method on the controller, but this only affects how you name your folders internally in your project.


Command and Query Handlers

In my previous post, I talked about some Command and Query handlers. The implementation I made is in .NET using C#. Here is what a command handler looks like:

public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

public class DeleteUserCommand : ICommand
{
    public Guid Id { get; set; }
}

public class DeleteUserCommandHandler : ICommandHandler<DeleteUserCommand>
{
    private readonly ISession _session;

    public DeleteUserCommandHandler(ISession session)
    {
        _session = session;
    }

    public void Handle(DeleteUserCommand command)
    {
        // handle the command, e.g. delete the user with command.Id
    }
}

There is also a command dispatcher that dispatches the necessary handlers to execute the commands. This one does it synchronously, but it can be made asynchronous quite easily.

public class DefaultCommandBus : ICommandBus
{
    private readonly ICommandHandlerFactory _commandHandlerFactory; // injected

    public void SubmitCommand<TCommand>(TCommand command) where TCommand : ICommand
    {
        var handler = _commandHandlerFactory.GetHandler<TCommand>();
        handler.Handle(command);
    }
}

CommandHandlerFactory simply resolves the required handler using a DI container and is also responsible for releasing the handler.
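As a hedged sketch of what such a factory could look like (the `IContainer` abstraction and the `ReleaseHandler` method name here are my own placeholders, not necessarily what the actual implementation uses):

```csharp
using System;

// Minimal stand-ins for the interfaces shown earlier in the post.
public interface ICommand { }
public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

// Placeholder for whatever DI container you use (Windsor, StructureMap, ...).
public interface IContainer
{
    object Resolve(Type type);
    void Release(object instance);
}

public class CommandHandlerFactory
{
    private readonly IContainer _container;

    public CommandHandlerFactory(IContainer container)
    {
        _container = container;
    }

    // Resolve the handler registered for this command type.
    public ICommandHandler<TCommand> GetHandler<TCommand>() where TCommand : ICommand
    {
        return (ICommandHandler<TCommand>)_container.Resolve(
            typeof(ICommandHandler<TCommand>));
    }

    // The factory owns the handler's lifetime, so it also releases it.
    public void ReleaseHandler(object handler)
    {
        _container.Release(handler);
    }
}
```

The dispatcher can then get the handler, invoke Handle, and hand it back via ReleaseHandler (ideally in a finally block) so any transient dependencies get disposed.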

On the query side, there is a QueryService that executes the query and returns the result. It uses a bit of dynamic keyword trickery to provide a strongly typed API.

public interface IQuery<TResult> : IQueryBase {}

public interface IQueryHandler<in TQuery, out TResult> : IQueryHandler where TQuery : IQuery<TResult>
{
    TResult Execute(TQuery query);
}

public class QueryService
{
    private readonly IContainer _container; // the DI container abstraction

    public TResult ExecuteQuery<TResult>(IQuery<TResult> query)
    {
        var handlerType = typeof(IQueryHandler<,>).MakeGenericType(query.GetType(), typeof(TResult));
        var handler = _container.Resolve(handlerType);
        return (TResult)((dynamic)handler).Execute((dynamic)query);
    }
}

You can simply use the query as follows, and it will return a strongly typed result.

var query = new SearchUsersQuery { SearchTerm = "term" };
var result = _queryService.ExecuteQuery(query);
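The SearchUsersQuery used above is never shown in the post; as an illustration of how it could look (the User type and the in-memory search here are placeholders — a real handler would query the data store, e.g. through an NHibernate ISession):

```csharp
using System.Collections.Generic;
using System.Linq;

// Minimal stand-ins for the interfaces shown earlier in the post.
public interface IQueryBase { }
public interface IQuery<TResult> : IQueryBase { }
public interface IQueryHandler<in TQuery, out TResult> where TQuery : IQuery<TResult>
{
    TResult Execute(TQuery query);
}

public class User
{
    public string Name { get; set; }
}

// Hypothetical query matching the usage example above.
public class SearchUsersQuery : IQuery<IList<User>>
{
    public string SearchTerm { get; set; }
}

// Hypothetical handler; the data source is an in-memory list purely
// to keep the sketch self-contained.
public class SearchUsersQueryHandler : IQueryHandler<SearchUsersQuery, IList<User>>
{
    private readonly IEnumerable<User> _users;

    public SearchUsersQueryHandler(IEnumerable<User> users)
    {
        _users = users;
    }

    public IList<User> Execute(SearchUsersQuery query)
    {
        return _users.Where(u => u.Name.Contains(query.SearchTerm)).ToList();
    }
}
```

Because the handler is resolved by the closed generic type IQueryHandler<SearchUsersQuery, IList<User>>, the QueryService can hand back an IList<User> with no casting on the caller's side.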

Command and Query Separation

Command Query Responsibility Segregation, or CQRS, is a term that has been popping up quite often recently. It is definitely not the solution to everything, but it certainly makes some kinds of solutions better. There are frameworks that make the implementation easier, such as NCQRS, which is very interesting but still very much in its infancy.

If you go with a full CQRS + Event Sourcing solution, the project might, depending on the experience and skill level of the developers, end up much more costly than expected. Even though CQRS + Event Sourcing offers a superior architecture in many cases and can be applied to many problems, that doesn’t mean it’s always the correct answer.

I recently did an implementation of a CQRS-inspired solution, without going the full-blown route with event sourcing. The requirements were as follows:

  • All queries need to be filtered according to the user’s privilege level.
  • All commands need to pass through an authorization layer.
  • All changes to the system need to be logged, at the command level.
  • It should be possible to expose all the functionality of the system through simple APIs.

Applying the CQRS principle in this case makes it easier to write Query and Command Handlers that can deal with these kinds of requirements. There are two sets of objects, Commands and Queries, each with corresponding handlers. QueryHandlers work synchronously and respond to Query objects. CommandHandlers, as per CQRS, do not return any results, can work either synchronously or asynchronously, and process Command objects. QueryHandlers are not allowed to commit any transactions (though this is not enforced by the code at this point) and work with a filtering layer that filters any results returned. CommandHandlers work with an authorization layer that authorizes every command in terms of a user context. Commands succeed as a whole, or fail as a whole.

Both types of handlers work against the same data store, and the query and command layers are the only possible interaction with the underlying data store, the only notable exception being the “authentication” layer. Performance isn’t an issue at this point, so there hasn’t been any need to go with a separate “Read Model” offering a denormalized view.
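One way to layer the authorization and logging requirements over every command without touching the individual handlers is a decorator registered in the container. A sketch under assumed interfaces (IAuthorizationService and ICommandLog are illustrations of mine, not the actual types from this implementation):

```csharp
using System;

// Minimal stand-ins for the interfaces shown earlier in the post.
public interface ICommand { }
public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
    void Handle(TCommand command);
}

// Assumed interfaces, not from the original implementation.
public interface IAuthorizationService
{
    bool IsAllowed(ICommand command); // checks against the current user context
}
public interface ICommandLog
{
    void Record(ICommand command);    // logs every change at the command level
}

// Decorator that wraps a real handler with authorization and logging.
// Register it so that resolving ICommandHandler<T> yields the decorated chain.
public class AuthorizingCommandHandler<TCommand> : ICommandHandler<TCommand>
    where TCommand : ICommand
{
    private readonly ICommandHandler<TCommand> _inner;
    private readonly IAuthorizationService _authorization;
    private readonly ICommandLog _log;

    public AuthorizingCommandHandler(ICommandHandler<TCommand> inner,
        IAuthorizationService authorization, ICommandLog log)
    {
        _inner = inner;
        _authorization = authorization;
        _log = log;
    }

    public void Handle(TCommand command)
    {
        if (!_authorization.IsAllowed(command))
            throw new UnauthorizedAccessException(typeof(TCommand).Name);

        _log.Record(command);     // all changes to the system are logged
        _inner.Handle(command);   // the command succeeds or fails as a whole
    }
}
```

The nice property of this shape is that new cross-cutting concerns (caching for queries, async dispatch for commands) slot in as further decorators without changing any handler.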

When exposing APIs, it is quite easy to expose all the Queries and Commands in a straightforward way and keep all the logic for filtering and authorization intact.

Of course, all of this could be achieved by simply turning everything into Requests and writing Request Handlers, or by doing standard CRUD, but having the separation makes it possible to make all commands asynchronous or to add some kind of caching layer for the queries later on.

Hosting Mercurial On IIS7

For those wondering how to host a Mercurial server in a Windows corporate environment: it is possible to host Mercurial 1.7.2 on IIS7 using CGI, and it is surprisingly smooth as well. It has been running flawlessly in a corporate production environment for a while now.

I followed the steps posted on Stack Overflow and the steps from Jeremy Skinner’s blog. Be aware that Jeremy Skinner’s steps use hgwebdir.cgi, which is from an older version of Mercurial; you should use hgweb.cgi instead.

There are some points that need special consideration:

Local vs Global Configuration

There are two scopes of configuration for Mercurial when running it through hgweb. The “global” configuration is controlled by the config file that is fed into hgweb.cgi; I will call this file hgweb.config. It holds all the default settings for any repository served through hgweb.cgi. Note that these settings are different from the global Mercurial settings on the server machine; they only apply when Mercurial runs through hgweb.cgi.

The other scope is the per repository configuration. These are controlled by hgrc files that reside within the .hg folder of each repository.

Multiple Repositories

The way hgweb maps repositories to URLs is configured within hgweb.config. There you can create a section called [paths] that does the mapping. There also needs to be a [web] section containing the baseurl for the server (see the hgweb documentation for more information). It was a bit tricky to get the paths working properly, as there seems to be conflicting information about this from various sources. What worked for me was this:

[web]
baseurl = http://<server>/hg/
style = monoblue

[paths]
/ = c:\hg\**

This maps all repositories under c:\hg to http://server/hg/repo_name. Quite neat.


Authentication

If you want push/pull authentication around Mercurial when running it under IIS, you need to enable Basic Authentication and Windows Authentication for the Mercurial application that you created when following the steps earlier.

Another important point is that “Impersonate” under the CGI settings for the application should be set to False: you want Mercurial itself to handle authorization for push or pull, not the actual CGI application running as that user.

Within each repository, you can create a list of users who are allowed to push under the [web] section using allow_push=. See the Mercurial documentation for more info about this.

Maximum bundle size

You might need to change the maximum post size on IIS when pushing large bundles (e.g. the initial commit for a large codebase) to the server. Otherwise IIS gives a strange 404 Not Found error.

The default maximum size is 30 MB; you can increase it by editing the web.config file for the Mercurial application, in the system.webServer section.

  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="2000000000" />
    </requestFiltering>
  </security>


Notifications

It is also possible to set up Mercurial to notify users about new pushes to repositories. Every time something is pushed to the central server, it can send out a mail with a summary of the commits within the bundle.

To set this up, you need to enable the notify extension in hgweb.config and configure SMTP, as well as the hooks.

These are the elements in hgweb.config:

[extensions]
notify =

[email]
from =

[smtp]
host = smtp-hostname

[notify]
# multiple sources can be specified as a whitespace separated list
sources = serve push pull bundle
strip = 2
config =
test = False

After this, within each repository, you need to edit the hgrc file for that repository and add:

[reposubs]
# key is glob pattern, value is comma-separated list of subscriber emails
* =,

You can look at the Notify Extension for Mercurial documentation for more information.


SSL

It is possible to set up SSL rather trivially using the standard IIS7 procedure, which I won’t go into in detail. If you are serving both HTTP and HTTPS, within your hgweb.config you can set

push_ssl = True

to enforce push only over SSL.

Switching to a DVCS in a corporate environment

After using Mercurial as a replacement for SVN, I will just say that yes, it is as good as the hype would have you believe, and it contributes greatly to the development process in a company. I will talk more in another article about why we picked Mercurial over the other DVCS alternatives; in this article I will focus on general DVCS workflows and benefits.

Although the DVCS (distributed version control system) concept has been around for a while, lately there has been a lot of hype around it with the explosion of GitHub and similar sites like BitBucket. In my corporate environment we had been using SVN for a while; even though we ran into issues once in a while, it worked at a more or less acceptable level and we were rather content with it. That was, of course, until I happened to listen to a seminar on Git and became very curious to learn more about it. Having dug deeper into DVCS, I can say that the reason I had been content with SVN was that I didn’t know what a source control system could be capable of.

The DVCS concept was a bit difficult to grasp at first, but as I dug deeper into it, I can easily say it was a paradigm shift of sorts, and it would be difficult to “unlearn” it. There are already numerous very good articles and tutorials about the major DVCS systems on the web, so I won’t go into detail about how they conceptually work. What I want to talk about is how a DVCS can fit into a corporate environment.

I’ve read comments here and there that source control systems like Mercurial and Git are more appropriate for open source development, where contributors to a project can be distributed across the globe and the pace of development is usually more relaxed. I actually see a DVCS as being able to cope with any kind of development workflow, while a centralized VCS forces you to work within certain boundaries.

We did the switch to Mercurial basically by doing a fresh start, and creating the initial branches that were still active on the old SVN repository. We didn’t believe the SVN history was particularly valuable, and if it would be useful at some point we could always look it up. This also gave a good opportunity to trim the fat from the source base.

I see the biggest value of a DVCS in the way it manages changes and the ease of merging different branches together. It is so easy to create and merge branches that we ended up creating a branch for every major or minor feature we were going to add. This makes it very flexible with regard to when you want to merge your features and release them. SVN simply did not offer this kind of power; even though it supports branching and merging, it feels very cumbersome and heavy in comparison.

We found a good development workflow in having a main “integration” branch, where automated tests are run and automated builds are made. All feature branches are created from that branch and later merged back into it when they are completed. Before each release we also create a release-* branch. This is the branch that will be QA’d and released. All fixes and improvements are made against the release branch, and the release branch is periodically merged back into the default branch. The automated tests and builds are also run against the release branch. There is a central repository that all developers work against and that the automated build system runs against. I believe this workflow is flexible enough that it’s possible to run multiple threads of development (i.e. feature branches) simultaneously without affecting the work on other features.

Most of these workflows apply to any kind of DVCS, be it Mercurial, Git or Bazaar, which are all more or less interchangeable in my opinion, with some differences in implementation details.

Thecus N8800SAS – An affordable storage solution

In my previous post I talked about different products for building a virtual lab environment for development and testing. In such an environment, with multiple hosts running hypervisors, good-performance shared storage is very important. While looking at different solutions, you quickly realize that storage is indeed expensive: the prices for SAS drives are much higher than the cheap 1 TB drives you can buy for home use, not to mention the prices for the storage servers. I don’t know if the price increase is justified; maybe there is just less competition in the market and the companies know that people will pay up.

After looking at a few different solutions, a product called N8800SAS by Thecus, a company I had never heard of previously, caught my eye. It was much cheaper (less than a third of the price!) than comparable products from, say, HP or Dell, and seemed to offer quite similar features.

I am by no means an expert on storage hardware, but I would say the hardware definitely looked and felt solid. It has a web interface where you set up RAID and configure iSCSI, SMB, NFS and so on, as expected. Everything in the web interface feels a bit homemade but gets the job done. It was no problem at all to set up a RAID 10 configuration using six 15k 600 GB SAS drives.

Connecting it to a VMware ESX server using iSCSI worked with no issues; I can’t say the same about NFS. After some hours of frustration, it turned out that the problem was somehow caused by the lack of proper DNS entries on the Thecus server. You need to add a hostname for the VMkernel IP of the ESX host (the IP used by VMware ESX for NFS and iSCSI, which is different from the management IP) to the /etc/hosts file on the Thecus server. I assume this problem would not have occurred if the DNS server had entries for the VMkernel IP.

To be able to edit the file, you need to enable SSH access on the server first. Thecus does not offer any SSH connectivity by default, even after upgrading the firmware to the latest version. However, you can install modules on the server, and there are Sysuser and SSHD modules which were actually made for the N5200 product but worked just fine with the N8800. After installing and activating the modules through the web interface, you can simply SSH into the server and use the “sys/sys” credentials to log in with root privileges.

I have not yet been able to look at what kind of backup functionality it offers, but on the surface it seems to offer both Nsync and backup to USB / eSATA (the hardware has an eSATA port).

Thecus are offering a very decent product for a very reasonable price. If you don’t need all the bells and whistles of servers from Dell or HP which are about 3 times as expensive and just want pure high performance storage, then I would definitely recommend the N8800SAS.

Virtual Lab Environments for Development and Testing

Lately I’ve been evaluating different products for a virtual lab environment to be used for software testing. A virtual lab is a huge convenience and time saver when it comes to rapidly provisioning environments for the software QA process. It is, of course, in essence just one of the many tools available to help ensure that software is released without defects.

A virtual lab environment allows users to create different templates of virtual machines and reuse those templates to quickly start up disposable test environments with a particular configuration. The user can start groups of machines, with each machine dedicated to a purpose and all of them running together in a virtual network. The lab environment products also allow the user to take a snapshot of the machine at any point in time and revert back to that snapshot later on.

Having a central lab environment increases the collaboration possibilities between different users, as a configuration created by one user can be shared across multiple users. A tester can reproduce a defect in a virtual environment, and share it with a developer later for debugging.

Most of the virtual lab management products available seem to be oriented towards medium to large enterprises; however, I think even really small companies have a lot to gain from automating their lab environment.

I evaluated the following products:

VMware vCenter Lab Manager

This is VMware’s own product, an addition to their vSphere suite. It provides pretty much all the functionality you need from a virtual lab management tool. There is a web-based interface, and you can view the console output of each machine directly through the web pages. It is easy to set up templates and networking, take snapshots and so on. It supports LDAP synchronization for user and group management as well. vCenter Lab Manager is only compatible with the VMware vSphere hypervisors, such as ESX or ESXi.

The problem with this product is that it seems to be geared toward larger organizations, as the pricing makes it unfeasible to run at a smaller scale. You will need at least one vSphere vCenter Standard Server, which by itself is $5000. Even though the vCenter Server supports an unlimited number of hosts, if you are going to use a small number of hosts it adds a lot to the total licensing cost. In addition, you need a Lab Manager license for each socket on the host machines, which is $1500. The total licensing cost becomes very steep for a company that will utilize two or three hosts running hypervisors. VMware does offer cheap hypervisor packages that can be used by small organizations, such as vSphere Essentials, but they are not compatible with the Lab Manager product.

VMLogix LabManager

This is a product similar to VMware Lab Manager, with some pretty major differences. It offers similar functionality to vCenter Lab Manager, but it is not bound to a specific hypervisor: it is possible to run it with Xen, Hyper-V or ESX. This gives great flexibility when it comes to deployment, as it does not require the costly vCenter Server Standard edition even if you want to use it with ESX hosts. Thus, it is possible to use VMLogix LabManager even if you are only using the vSphere Essentials bundle. This is perfect for organizations that do not plan to run more than three hosts, which is the limit for the Essentials bundle.

The management is web based, and in my experience the web interface seemed much faster and smoother than VMware’s (the pages seem to be rendered by Python). Similar to VMware, it is possible to view the console output of the machines directly from the web pages or through VNC.

The licensing cost is $2200 per socket; however, VMLogix offers free licenses to smaller organizations, requiring that you pay only for support.

As a whole, this product seemed to be better thought out and more mature in comparison to the Lab Manager from VMware.


SkyTap

SkyTap uses a very different model compared to the other two mentioned above, so it may or may not be a comparable product. It is a 100% hosted solution, similar to Amazon EC2 but with more control over the machines and with lots of templates already created for you, such as all standard versions of Windows from Microsoft. The pricing model is that you pay per hour of virtual machine usage, again very similar to Amazon EC2.

As it is a hosted solution, you do not need to make any investment in hardware, and you can scale your deployment up or down as you go. Having a lot of the operating system images already built is also a big time saver, as with the other solutions you need to build the library of images yourself.

Although the concept is very nice, they only have servers in the USA, and being in Sweden, the latency was not that great. That made the whole experience feel unresponsive. Working for extended periods with that latency would be a big inconvenience to everyone and would decrease efficiency overall.

However, I can definitely see the potential of their offering once they solve these kinds of issues. The product also has a lot of potential as a training and product demonstration tool for sales people or customers, as the virtual machines can easily be accessed from anywhere in the world.

To wrap things up, I feel that VMLogix offers the best value when it comes to building a lab environment at a small scale. SkyTap offers a really good alternative for companies that have good latency to their servers, are not willing to make a big investment up front, and do not want to deal with configuring and installing all the servers.