Integrating ASP.NET Web API with Castle Windsor

Currently ASP.NET Web API seems like the tool of choice for building RESTful APIs on the .NET Framework. Overall it feels much better thought out than “vanilla” ASP.NET MVC, with easier access to more extensibility points. The fact that it is open source is awesome too. It is not perfect, but it is miles better than WCF for building any kind of lightweight web service at the moment. One of my favourite features is Delegating Handlers (which I might cover in a future article), which are particularly powerful.

For most real world applications that will be built using Web API, using Dependency Injection will generally be a good idea. I’ve always been a big Castle Windsor fan, and even though there are plenty of great IoC containers available, Castle has always been the old faithful for me, and I have yet to encounter a DI problem that I couldn’t solve using Castle.

There are a few different ways to integrate a container with Web API, but since Castle uses the Resolve-Release pattern, I was happiest using the dependency scopes built into Web API, as they support releasing dependencies gracefully and work under both IIS and self-hosting.

The key interfaces are IDependencyResolver and IDependencyScope, which serve as the main integration points into Web API. Keep in mind that the IDependencyResolver in Web API is not the same as the one in MVC, even though they share the same interface name. The namespaces are different, which can be a little confusing since a Web API application is also an MVC application at the same time. This is a common theme throughout Web API: several key interfaces have both a Web API version and an MVC version, and it is easy to mix them up.

The IDependencyResolver interface is used to resolve everything outside a request scope. This means all the infrastructural interfaces of Web API (for example, IHttpControllerActivator) are resolved using the IDependencyResolver. If the resolver returns null, the default implementation is used. The IDependencyResolver is never disposed by the framework and is only ever used to resolve singletons, so the Resolve / Release pattern does not apply to the IDependencyResolver itself. A typical IDependencyResolver implementation using Windsor might look like this:

internal class WindsorDependencyResolver : IDependencyResolver
{
    private readonly IKernel container;

    public WindsorDependencyResolver(IKernel container)
    {
        this.container = container;
    }

    public IDependencyScope BeginScope()
    {
        return new WindsorDependencyScope(this.container);
    }

    public object GetService(Type serviceType)
    {
        return this.container.HasComponent(serviceType) ? this.container.Resolve(serviceType) : null;
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return this.container.ResolveAll(serviceType).Cast<object>();
    }

    public void Dispose() {}
}

Quite straightforward, except that it is important for the GetService method to return null when the component is not registered; this ensures that the framework's default implementation is used instead, which is likely the behaviour you want most of the time.

During the life-cycle of a request, a dependency scope (implemented by IDependencyScope) is created for the request using the BeginScope method on the IDependencyResolver. At the end of the request, the dependency scope is disposed. This allows us to implement the Resolve / Release pattern using Castle Windsor.

public class WindsorDependencyScope : IDependencyScope
{
    private readonly IKernel container;
    private readonly IDisposable scope;

    public WindsorDependencyScope(IKernel container)
    {
        this.container = container;
        this.scope = container.BeginScope();
    }

    public object GetService(Type serviceType)
    {
        return this.container.HasComponent(serviceType) ? this.container.Resolve(serviceType) : null;
    }

    public IEnumerable<object> GetServices(Type serviceType)
    {
        return this.container.ResolveAll(serviceType).Cast<object>();
    }

    public void Dispose()
    {
        this.scope.Dispose();
    }
}

The Scoped lifestyle is new in Castle Windsor 3 and makes it possible to create an arbitrary scope bounded by the object returned from the Container.BeginScope call. When that object is disposed, the scope ends and Castle releases all objects with the Scoped lifestyle that were resolved within it.
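To make the mechanics concrete, here is a minimal sketch of the scoped lifestyle in isolation (IUnitOfWork / UnitOfWork are hypothetical types used purely for illustration):

// requires "using Castle.MicroKernel.Lifestyle;" for the BeginScope() extension method
container.Register(Component.For<IUnitOfWork>()
                            .ImplementedBy<UnitOfWork>()
                            .LifestyleScoped());

using (container.BeginScope())
{
    var first = container.Resolve<IUnitOfWork>();
    var second = container.Resolve<IUnitOfWork>();
    // first and second are the same instance inside this scope
} // disposing the scope releases (and disposes) every scoped instance resolved within it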

As the top level object within the request scope will be the Controller, the Controllers need to be registered with a Scoped lifestyle to make sure that they will get released at the end of the request when the dependency scope ends.

public class WindsorWebApiInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(AllTypes.FromThisAssembly().BasedOn<ApiController>().LifestyleScoped());
    }
}

Finally, we need to replace the default dependency resolver with the Windsor implementation in global.asax and install our dependencies:

protected void Application_Start()
{
    ...
    var container = new WindsorContainer();
    container.Install(FromAssembly.This());
    GlobalConfiguration.Configuration.DependencyResolver = new WindsorDependencyResolver(container.Kernel);
    ...
}
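For self-hosting (mentioned above as one of the benefits of this approach), the wiring is analogous. Here is a rough sketch using the Web API self-host package, with a made-up base address:

var config = new HttpSelfHostConfiguration("http://localhost:8080");
config.Routes.MapHttpRoute("DefaultApi", "api/{controller}/{id}", new { id = RouteParameter.Optional });

var container = new WindsorContainer();
container.Install(FromAssembly.This());
config.DependencyResolver = new WindsorDependencyResolver(container.Kernel);

using (var server = new HttpSelfHostServer(config))
{
    server.OpenAsync().Wait();
    Console.WriteLine("Web API self-host listening, press Enter to quit.");
    Console.ReadLine();
}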

A fully working sample application is also available on GitHub.

Syntax highlighting for script tags with html templates in Visual Studio 2010 MVC3 applications

With the prominence of frameworks like Knockout and Backbone, it is now quite common to use client side templates when working on client-side heavy web applications.

A convenient way to create client-side templates is to use script tags like this:

<script id="templateId" type="text/html">
 <div>template contents</div>
</script>

However, VS2010 does not do syntax highlighting on script tags, which ends up looking ugly like this:


This has been a little annoyance of mine, and as a cure I created a simple helper method influenced by the Html.BeginForm() helper:

public static class HtmlHelperExtensions
{
    public static ScriptTag BeginHtmlTemplate(this HtmlHelper helper, string id)
    {
        return new ScriptTag(helper, "text/html", id);
    }
}

public class ScriptTag : IDisposable
{
    private readonly TextWriter writer;

    private readonly TagBuilder builder;

    public ScriptTag(HtmlHelper helper, string type, string id)
    {
        this.writer = helper.ViewContext.Writer;
        this.builder = new TagBuilder("script");
        this.builder.MergeAttribute("type", type);
        this.builder.MergeAttribute("id", id);
        writer.WriteLine(this.builder.ToString(TagRenderMode.StartTag));
    }

    public void Dispose()
    {
        writer.WriteLine(this.builder.ToString(TagRenderMode.EndTag));
    }
}

The helper can be used as follows:

@using (this.Html.BeginHtmlTemplate("person-template"))
{
   <h3></h3>
   <p>Credits: <span></span></p>
}
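For reference, the markup the helper writes out for the template above looks like this (TagBuilder renders the attributes it has been given, so the id and type end up on the opening script tag):

<script id="person-template" type="text/html">
   <h3></h3>
   <p>Credits: <span></span></p>
</script>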

Here is the result, syntax highlighted:

Previous and next weekday in Ruby and RSpec

Lately I’ve been doing some Ruby coding as well as some Ruby on Rails, and enjoying it a lot. It is definitely very different from C# and ASP.NET MVC 3, in that the syntax is a lot more concise: you end up typing a lot less code to do the same thing. I am still not sure whether your productivity is actually higher, though. When it comes to productivity, there are too many factors to consider to make an objective analysis, and I also have no experience working on a large-scale Ruby project, where I imagine many other factors would come into play.

I needed to write some additional utility methods for the Date class in Ruby, to find the next and previous weekdays excluding weekends. In Ruby it is possible to do this just by “amending” the class definition (in the C# world these would have been extension methods); you can open up any class and add methods to it. I still feel pretty much a Ruby novice though, and unlike in the C# world there usually seem to be several ways to do the same thing in Ruby. This can be overwhelming at first, but I think the more you learn, the more comfortable you get with it.

It was also a good exercise to get some practice with RSpec, which is a testing framework for Ruby that allows you to do Behavior-Driven Development. BDD is a form of Test-Driven Development but the focus is on making the tests reflect the behaviour of the software as well as making tests readable almost in a natural language. Ruby is a good fit for BDD due to the clarity of the syntax.

As always with BDD, you write the tests first. I won’t go over the whole development process and instead just put the resulting test suite here:

require "rspec"
require 'date_ext'

describe "Date extensions" do
  before :all do
    @thursday, @friday, @saturday, @sunday, @monday =
        (17..21).map { |n| Date.new(2011,11,n) }
  end

  context "prev_weekday" do
    it "should find thursday before friday" do
      @friday.prev_weekday.should eq @thursday
    end
    it "should find friday before sunday" do
      @monday.prev_weekday.should eq @friday
    end
  end
  context "next_weekday" do
    it "should find friday after thursday" do
      @thursday.next_weekday.should eq @friday
    end
    it "should find monday after friday" do
      @friday.next_weekday.should eq @monday
    end
  end
end

You can see how the tests almost read like plain English. Having code that is easy to read without clutter is one of the principal pillars of the Ruby language.

Here is the implementation that makes all the tests pass:

require 'date'

class Date
  def weekend?
    saturday? || sunday?
  end

  def prev_weekday
    # prev_day / next_day come from the standard date library
    previous = prev_day
    while previous.weekend?
      previous = previous.prev_day
    end
    previous
  end

  def next_weekday
    nextday = next_day
    while nextday.weekend?
      nextday = nextday.next_day
    end
    nextday
  end
end

Reflection.InvokeMember vs Dynamic vs Static Binding Performance

In yesterday’s post, Visitor Pattern Revisited with C# 4, I talked about a new way to implement the visitor pattern with C# 4. In that article I omitted one way of implementing the pattern in pre-C# 4.0 versions: reflection.

You can implement the Serialize method with reflection as follows:

public void SerializeWithReflection(Product p)
{
    typeof (ProductSerializer).InvokeMember("Serialize", BindingFlags.Default | BindingFlags.InvokeMethod, 
        null, this, new [] {p});
}

The obvious drawback of this approach is that the method name is not strongly typed, which makes it prone to refactoring errors. If you are using this approach, make sure your unit tests cover it.

I decided to benchmark all three methods, just to see what kind of performance you get with dynamic binding compared to the other available methods.

Here is the full code:

    public abstract class Product { }
    public class Book : Product { }
    public class Record : Product { }
    public class Movie : Product { }

    public class ProductSerializer
    {
        public void Serialize(Book b) { }
        public void Serialize(Record b) { }
        public void Serialize(Movie b) { }

        public void SerializeWithDynamic(Product p)
        {
            Serialize((dynamic)p);
        }


        public void SerializeWithReflection(Product p)
        {
            typeof(ProductSerializer).InvokeMember("Serialize", BindingFlags.Default | BindingFlags.InvokeMethod,
                null, this, new[] { p });
        }

        public void SerializeWithStaticBinding(Product p)
        {
            if (p is Book) Serialize(p as Book);
            if (p is Record) Serialize(p as Record);
            if (p is Movie) Serialize(p as Movie);
        }
    }

    class Program
    {
        const int NumRepetitions = 10000000;
        static TimeSpan Benchmark(Action action)
        {
            var stopwatch = Stopwatch.StartNew();
            for (int i = 0; i < NumRepetitions; i++)
                action.Invoke();
            return stopwatch.Elapsed;
        }

        static void Main(string[] args)
        {
            Product b = new Movie();
            var serializer = new ProductSerializer();
            Console.WriteLine("SerializeWithReflection: " + Benchmark(() => serializer.SerializeWithReflection(b)));
            Console.WriteLine("SerializeWithDynamic: " + Benchmark(() => serializer.SerializeWithDynamic(b)));
            Console.WriteLine("SerializeWithStaticBinding: " + Benchmark(() => serializer.SerializeWithStaticBinding(b)));
        }
    }

And this is the output:

SerializeWithReflection:    00:00:40.9627807
SerializeWithDynamic:       00:00:00.5167343
SerializeWithStaticBinding: 00:00:00.2959728

You can see that reflection is by far the slowest; it is slower than dynamic binding by almost two orders of magnitude. Dynamic binding is only about twice as slow as static binding. Keep in mind that this is for 10,000,000 repetitions, so it might not mean much in the grand scheme of things when it comes to performance. Realistically, there is no clear gain in writing the long static-binding code instead of the much more compact dynamic version; the biggest win is compact, clean and strongly typed code rather than performance compared to the other approaches.

Visitor Pattern Revisited with C# 4

The Visitor Pattern is a well-known pattern that has good uses but has so far been rather awkward and ugly to implement in C# versions prior to 4.

The visitor pattern is useful in scenarios in which you need to implement different behaviors for different objects within a class hierarchy, but you do not want to use virtual or abstract methods to achieve this. A good example is serialization logic for a hierarchy of objects. In general, if you want to adhere to the Single Responsibility Principle, you do not want the objects that hold domain data to also deal with the algorithms that operate on them.

Let us imagine that we have a class hierarchy as follows:

public abstract class Product {..}
public class Book : Product {..}
public class Record : Product {..}
public class Movie : Product {..}

Let us also say that we want to implement some kind of serialization for these classes. One way to do this is to add an abstract Serialize() method to the Product class and have each class override it. However, this approach breaks the Single Responsibility and Encapsulation principles for your objects, and it might not even be feasible, as the objects might reside in a separate assembly that is out of your control. The preferred way to extend the behaviour of your classes in this scenario is to use a Visitor class that implements the specific behaviour for each class in the hierarchy.

An example:

public class ProductSerializer
{        
    public void Serialize(Book b) { }
    public void Serialize(Record r) { }
    public void Serialize(Movie m) { }
}

There is one thing missing here: there is no implementation for Serialize(Product p). As C# is a statically typed language, the following code will not compile:

Product product = new Book();
var serializer = new ProductSerializer();
serializer.Serialize(product);

Even though it is obvious that the product is of type Book, and there is an implementation of the Serialize method taking a Book as a parameter, static typing means the compiler binds the method at compile time and thus requires a method with the signature Serialize(Product p). How would we go about implementing such a method in the pre-C# 4 world? Here is where it gets ugly and awkward:

public void Serialize(Product p)
{
    if (p is Book) Serialize(p as Book);
    if (p is Record) Serialize(p as Record);
    if (p is Movie) Serialize(p as Movie);
}

This also means that you need to add a branch for every new class in the hierarchy. Imagine if the visitor involved two arguments; you would then need to take into account all combinations of parameter types. So is there a better way to do this?

C# 4.0 introduced the dynamic keyword and dynamic dispatch, which we can put to good use in a scenario like this.

The above awkward and verbose implementation of Serialize can now be replaced with a much cleaner one as follows:

public void Serialize(Product p)
{
  Serialize((dynamic)p);
}

So how does this magic work? The dynamic keyword is itself a type; the difference is that it tells the compiler to resolve the type at run time instead of at compile time. The compiler would bind the statically typed Serialize call above to the same method every time, Serialize(Product p), regardless of the actual type of the instance, because the reference is of type Product. However, when p is cast to dynamic, the compiler defers the choice of the proper method overload to run time and no longer cares about the static type of the reference. At run time it sees that the passed-in parameter is a Book and picks the most appropriate overload. There is a certain performance hit, of course, but in most cases it is minimal, and the benefit of having clean code is much higher.
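One caveat worth keeping in mind (my own hedged note, using a hypothetical Magazine class): the overload is chosen only from the methods that actually exist, so a new subclass without a matching overload fails at run time rather than at compile time.

// Hypothetical new product type with no Serialize(Magazine) overload:
public class Magazine : Product { }

Product p = new Magazine();
var serializer = new ProductSerializer();

// Serialize((dynamic)p) binds back to Serialize(Product) itself, recursing until the
// stack overflows; without such a catch-all overload the runtime binder would throw a
// RuntimeBinderException instead. Either way the gap only shows up at run time.
serializer.Serialize(p);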

Auto-ignore non existing destination properties with AutoMapper

By default, AutoMapper expects every property on the destination type to be matched by a property on the source type. If some destination properties have no counterpart on the source, Map() will still run without throwing an exception. However, Mapper.AssertConfigurationIsValid() will throw when you validate the configuration.

Imagine if we have the following two types:

class SourceType
{
    public string FirstName { get; set; }
}

class DestinationType
{    
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

With a map from SourceType to DestinationType, AssertConfigurationIsValid() will throw the following exception:

AutoMapper.AutoMapperConfigurationException : The following 1 properties on DestinationType are not mapped: 
	LastName 
Add a custom mapping expression, ignore, or rename the property on SourceType.

You can override this behavior with a small extension method that ignores all destination properties that have no matching property on the source type.

public static IMappingExpression<TSource, TDestination> IgnoreAllNonExisting<TSource, TDestination>(this IMappingExpression<TSource, TDestination> expression)
{
    var sourceType = typeof(TSource);
    var destinationType = typeof(TDestination);
    var existingMaps = Mapper.GetAllTypeMaps().First(x => x.SourceType.Equals(sourceType)
        && x.DestinationType.Equals(destinationType));
    foreach (var property in existingMaps.GetUnmappedPropertyNames())
    {
        expression.ForMember(property, opt => opt.Ignore());
    }
    return expression;
}

Then it is possible to do the mapping as follows:

Mapper.CreateMap<SourceType, DestinationType>()
    .IgnoreAllNonExisting();

It is also possible to customize this method to your needs, by specifically ignoring properties which have a protected or private setter, for example.
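As a hedged example of such a customization (the method name below is made up), the same idea can be used to ignore destination properties that lack a public setter:

public static IMappingExpression<TSource, TDestination> IgnoreNonPublicSetters<TSource, TDestination>(
    this IMappingExpression<TSource, TDestination> expression)
{
    // Ignore every destination property that cannot be set publicly.
    var readOnlyProperties = typeof(TDestination)
        .GetProperties(BindingFlags.Public | BindingFlags.Instance)
        .Where(p => p.GetSetMethod() == null);

    foreach (var property in readOnlyProperties)
    {
        expression.ForMember(property.Name, opt => opt.Ignore());
    }
    return expression;
}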

ASP.NET MVC 3 Aspect Oriented Programming with Castle Interceptors

Aspect-Oriented Programming, or AOP for short, is a powerful tool that can be useful when developing any kind of software. Most complex applications have cross-cutting concerns that are spread throughout the codebase; traditional examples include logging, exception handling, authorization and profiling / benchmarking. AOP is a way to separate these cross-cutting concerns from the affected code. I liken AOP to a kind of declarative programming, in contrast to the imperative programming that is by far the dominant paradigm today.

In a traditional imperative language, you would code by hand how and where the logging should be done, for example by making a call to a LogManager of some sort to log a message. This means the method that needs to be logged has to contain the logging aspect inside it. With AOP you can instead make a more declarative statement such as “all calls to methods in classes inheriting from type T should be logged” without touching the actual method or class you want logged.

Some languages support AOP out of the box, such as Python with its decorators. C# has no built-in support for it, although there are commercial libraries such as PostSharp that achieve this by injecting IL code into the assembly.

I want to talk about Castle Windsor interceptors, which I think bring AOP to the masses within the .NET community. Windsor interceptors work by proxying classes, allowing you to act before and after a method is invoked on the target class.

I will show a simple example of integrating Castle interceptors with an ASP.NET MVC 3 application and doing logging through interceptors. ASP.NET MVC 3 as a framework allows some cross-cutting concerns to be implemented through ActionFilters, but this might not always be enough for what you want, and filters are limited to controllers and their action methods. Windsor interceptors provide a much more powerful way to intercept method calls and act on them, on any of your classes.

I am using the following libraries / frameworks for this example (in the case that this article is still relevant after a few years..):

  • ASP.NET MVC 3
  • Castle Windsor 2.5.3
  • Castle Windsor Logging Facility
  • log4net 1.2

We will start by integrating Castle.Windsor into the ASP.NET MVC 3 project. There is an excellent tutorial available for this already, so I will cut it short. We will use the Fluent API and thus skip the .xml configuration. We start by creating an empty MVC 3 app and creating our ControllerFactory that will handle the lifecycle of our Controllers.

public class WindsorControllerFactory : DefaultControllerFactory
{
    private readonly IKernel kernel;

    public WindsorControllerFactory(IKernel kernel)
    {
        this.kernel = kernel;
    }

    public override void ReleaseController(IController controller)
    {
        kernel.ReleaseComponent(controller);
    }

    protected override IController GetControllerInstance(RequestContext requestContext, Type controllerType)
    {
        if (controllerType == null)
        {
            throw new HttpException(404,
                                    string.Format("The controller for path '{0}' could not be found.",
                                                    requestContext.HttpContext.Request.Path));
        }
        return (IController) kernel.Resolve(controllerType);
    }
}

We then plug this into the application through Global.asax:

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
    RegisterCastle();
}

private void RegisterCastle()
{
    container = new WindsorContainer().Install(FromAssembly.This());
    ControllerBuilder.Current.SetControllerFactory(new WindsorControllerFactory(container.Kernel));
}

We then need to register the controllers using the Fluent API. For this let’s create a ControllerInstaller class:

public class ControllerInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        // Register every IController in the same namespace as HomeController
        // whose name ends with "Controller", as a transient component.
        container.Register(AllTypes.FromThisAssembly()
                                .BasedOn<IController>()
                                .If(Component.IsInSameNamespaceAs<HomeController>())
                                .If(t => t.Name.EndsWith("Controller"))
                                .Configure(c => c.LifeStyle.Transient));
    }
}

From now on, Castle Windsor will take care of the lifecycle of our controllers.

The next step is to plug in our logging. We will use the Castle Windsor Logging Facility, in this case with log4net as the backing logging framework. You need to add references to Castle.Facilities.Logging and Castle.Services.Logging.Log4netIntegration and create an installer:

public class LoggerInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.AddFacility<LoggingFacility>(f => f.LogUsing(LoggerImplementation.Log4net).WithAppConfig());
    }
}

Also add the following to web.config to log all messages at Debug level and above to the debug output window (while debugging). log4net has many options for configuring different kinds of appenders, which I won’t go into detail about here.

 <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" requirePermission="false" />
  </configSections>
<log4net debug="true">
    <root>
        <level value="DEBUG" />
        <appender-ref ref="TraceAppender" />
    </root>
    <appender name="TraceAppender" type="log4net.Appender.TraceAppender">
        <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date [%thread] %-5level %logger - %message%newline" />
        </layout>
    </appender>
</log4net>

Next up is creating our interceptor class to log all action methods. To do this we intercept the calls to the OnActionExecuting and OnActionExecuted methods on the Controller, which are invoked before and after the corresponding action method, respectively. This is pretty similar to how custom ActionFilters work. There is a lot more you can do here, such as inspecting the input parameters, handling exceptions, profiling performance and so forth.

public class ControllerLogInterceptor : IInterceptor
{
    private readonly ILogger logger;

    public ControllerLogInterceptor(ILogger logger)
    {
        this.logger = logger;
    }

    public void Intercept(IInvocation invocation)
    {
        switch (invocation.Method.Name)
        {
            case "OnActionExecuting":
                OnActionExecuting(invocation);
                break;
            case "OnActionExecuted":
                OnActionExecuted(invocation);
                break;
        }
        invocation.Proceed();
    }

    private void OnActionExecuted(IInvocation invocation)
    {
        var actionExecutedContext = invocation.Arguments[0] as ActionExecutedContext;
        logger.Debug("Executed action: " + invocation.TargetType.Name + "." +
                        actionExecutedContext.ActionDescriptor.ActionName);
    }

    private void OnActionExecuting(IInvocation invocation)
    {
        var actionExecutingContext = invocation.Arguments[0] as ActionExecutingContext;
        logger.Debug("Executing action: " + invocation.TargetType.Name + "." +
                        actionExecutingContext.ActionDescriptor.ActionName);
    }
}

The last step is to register our Interceptor and also configure controllers to use this interceptor.

public class LogInterceptorInstaller : IWindsorInstaller
{
    public void Install(IWindsorContainer container, IConfigurationStore store)
    {
        container.Register(Component.For<ControllerLogInterceptor>());
    }
}

To apply the interceptor to the controllers, change the ControllerInstaller as follows (note the Interceptors part):

public void Install(IWindsorContainer container, IConfigurationStore store)
{
    container.Register(AllTypes.FromThisAssembly()
                            .BasedOn<IController>()
                            .If(Component.IsInSameNamespaceAs<HomeController>())
                            .If(t => t.Name.EndsWith("Controller"))
                            .Configure(c => c.LifeStyle.Transient.Interceptors<ControllerLogInterceptor>()));
}

Now you should get some log output in your application, according to your log4net config.

2011-06-02 21:20:35,279 [10] DEBUG MvcLogging.Castle.ControllerLogInterceptor - Executing action: HomeController.Index
2011-06-02 21:20:35,290 [10] DEBUG MvcLogging.Castle.ControllerLogInterceptor - Executed action: HomeController.Index

As you can see, we didn’t have to touch our controllers at all to implement tracing of actions. This was a fairly simple example of implementing logging in ASP.NET MVC 3 using AOP; there are more creative uses of AOP, and I will look at those in a future article.

Better looking URLs in ASP.NET MVC 3

If you are like me and thought the conventions for action/controller names in ASP.NET MVC 3 were not great (capitalizations are evil), here’s a small trick to make them look nicer.

Let’s say we have a controller whose name consists of more than one word, e.g. FooBarController, and it has an action called FooBaz. Normally ASP.NET MVC will map the URL as "~/FooBar/FooBaz". I don’t like this naming convention very much and would prefer "~/foo-bar/foo-baz". So how do we go about achieving this?

The normal routing options do not seem to support this kind of behavior. You could add a custom route for every controller with a long name and use the [ActionName] attribute on actions, but I wanted to see if a more uniform solution was possible.

The first option I investigated was writing my own IRouteHandler, which lets you change the default routing as you see fit. This requires writing your own IHttpHandler, the main request-processing pipeline in ASP.NET MVC 3. Looking at the source of System.Web.Mvc.MvcHandler, this seemed doable by copy-pasting it and changing the way it resolves the controller name. It felt wrong, however; making a copy of the whole request-processing pipeline for a cosmetic task seemed like overkill.

There is a much easier way to achieve this, using the controller factory injection that ASP.NET MVC 3 supports. As you might know, it is possible to override the way ASP.NET MVC 3 instantiates controllers, so you can create your own DelimiterControllerFactory implementation.

public class DelimiterControllerFactory : DefaultControllerFactory
{
    public override IController CreateController(RequestContext requestContext, string controllerName)
    {
        requestContext.RouteData.Values["action"] =
            requestContext.RouteData.Values["action"].ToString().Replace("-", "");
        return base.CreateController(requestContext, controllerName.Replace("-", ""));
    }
}

You can then plug this controller factory in during Application_Start() in Global.asax.

protected void Application_Start()
{
    AreaRegistration.RegisterAllAreas();
    ControllerBuilder.Current.SetControllerFactory(new DelimiterControllerFactory());
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
}

Now when you go to /foo-bar/foo-baz, it will resolve to FooBarController.FooBaz(). Keep in mind that you need to rename your View directories to foo-bar instead of FooBar. It might be possible to keep the old folder names by overriding the View() method on the controller, but this only affects how you name your folders internally in your project.

Command and Query Handlers

In my previous post, I talked about some Command and Query handlers. The implementation I made is in .NET using C#. Here is what a command handler looks like:

public interface ICommandHandler<in TCommand> where TCommand : ICommand
{
     void Handle(TCommand command);
}

public class DeleteUserCommand : ICommand
{
    public Guid Id { get; set; }
}

public class DeleteUserCommandHandler : ICommandHandler<DeleteUserCommand>
{
    private readonly ISession _session;

    public DeleteUserCommandHandler(ISession session)
    {
        _session = session;
    }

    public void Handle(DeleteUserCommand command)
    {
        //handle command  
    }
}

There is also a command dispatcher that resolves the appropriate handler and executes the command. This one does it synchronously, but it can be turned async quite easily; a possible asynchronous variant is sketched after the code.

public class DefaultCommandBus : ICommandBus
{
   ...
   public void SubmitCommand<TCommand>(TCommand command) where TCommand : ICommand
   {
      var handler = _commandHandlerFactory.GetHandler<TCommand>();
      try
      {
          handler.Handle(command);
      }
      finally
      {
          _commandHandlerFactory.ReleaseHandler(handler);
      }
   }
}
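As a rough sketch of what that asynchronous variant could look like (fire-and-forget via the thread pool; this method is not part of the original post):

public void SubmitCommandAsync<TCommand>(TCommand command) where TCommand : ICommand
{
    Task.Factory.StartNew(() =>
    {
        var handler = _commandHandlerFactory.GetHandler<TCommand>();
        try
        {
            handler.Handle(command);
        }
        finally
        {
            // Releasing on the worker thread keeps the Resolve / Release pairing intact.
            _commandHandlerFactory.ReleaseHandler(handler);
        }
    });
}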

The CommandHandlerFactory simply resolves the required handler using a DI container and is also responsible for releasing the handler.
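The factory itself is not shown here; a minimal sketch of what it might look like on top of Castle Windsor (the interface and class names are assumptions):

public interface ICommandHandlerFactory
{
    ICommandHandler<TCommand> GetHandler<TCommand>() where TCommand : ICommand;
    void ReleaseHandler(object handler);
}

public class WindsorCommandHandlerFactory : ICommandHandlerFactory
{
    private readonly IWindsorContainer _container;

    public WindsorCommandHandlerFactory(IWindsorContainer container)
    {
        _container = container;
    }

    public ICommandHandler<TCommand> GetHandler<TCommand>() where TCommand : ICommand
    {
        return _container.Resolve<ICommandHandler<TCommand>>();
    }

    public void ReleaseHandler(object handler)
    {
        // Returning the handler to the container lets Windsor run any decommission concerns.
        _container.Release(handler);
    }
}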

On the query side, there is a QueryService that executes a query and returns the result. It uses a bit of dynamic keyword trickery to provide a strongly typed API.

public interface IQuery<TResult> : IQueryBase {}

public interface IQueryHandler<in TQuery, out TResult> : IQueryHandler where TQuery : IQuery<TResult>
{
   TResult Execute(TQuery query);
}

public class QueryService
{
  public TResult ExecuteQuery<TResult>(IQuery<TResult> query)
  {
     var handlerType = typeof (IQueryHandler<,>).MakeGenericType(query.GetType(), typeof (TResult));
     var handler = _container.Resolve(handlerType);
     try
     {
        return (TResult)((dynamic)handler).Execute( (dynamic)query);
     }
     finally
     { 
        _container.Release(handler);
     }
  }
}

You can simply use the query as follows, and it will return a strongly typed result.

var query = new SearchUsersQuery { SearchTerm = "term" };
var result = _queryService.ExecuteQuery(query);
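For completeness, here is a hedged sketch of what the query and its handler might look like (the User entity, its Name property and the NHibernate-style ISession usage are assumptions for illustration, mirroring the command handler shown earlier):

public class SearchUsersQuery : IQuery<IList<User>>
{
    public string SearchTerm { get; set; }
}

public class SearchUsersQueryHandler : IQueryHandler<SearchUsersQuery, IList<User>>
{
    private readonly ISession _session;

    public SearchUsersQueryHandler(ISession session)
    {
        _session = session;
    }

    public IList<User> Execute(SearchUsersQuery query)
    {
        // Query the data store for users matching the search term.
        return _session.QueryOver<User>()
                       .WhereRestrictionOn(u => u.Name).IsLike("%" + query.SearchTerm + "%")
                       .List();
    }
}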

Command and Query Separation

Command Query Responsibility Segregation, or CQRS, is a term that has been popping up quite often recently. It is definitely not the solution to everything, but it certainly makes some kinds of solutions better. There are frameworks such as NCQRS that make the implementation easier, and it is certainly very interesting, but it is still very much in its infancy.

If you go with the full CQRS + Event Sourcing solution, then depending on the experience and skill level of the developers the project might end up being much more costly than expected. Even though CQRS + Event Sourcing offers a superior architecture in many cases and can be applied to many problems, that doesn’t mean it is always the correct answer.

I recently did an implementation of a CQRS-inspired solution, without going the full-blown route with event sourcing. The requirements were as follows:

  • All the queries need to be filtered according to user’s privilege level.
  • All the commands need to pass through an authorization layer.
  • All changes to system need to be logged, at the command level.
  • It should be possible to expose all the functionality of the system through simple APIs.

Applying the CQRS principle in this case makes it easier to write query and command handlers that can deal with these kinds of requirements. There are two sets of objects, Commands and Queries, each with corresponding handlers. QueryHandlers work synchronously and respond to Query objects. CommandHandlers, as per CQRS, do not return any results, can work either synchronously or asynchronously, and process Command objects. QueryHandlers are not allowed to commit any transactions (though this is not enforced by the code at this point) and work with a filtering layer that filters any results returned. CommandHandlers work with an authorization layer that authorizes every command in terms of a user context; commands succeed as a whole or fail as a whole. Both types of handlers work against the same data store, and the query and command layers are the only possible interaction with the underlying data store, the only notable exception being the “authentication” layer. Performance isn’t an issue at this point, so there hasn’t been any need to go with a different “Read Model” offering a denormalized view.
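As a hedged illustration of the command side, the authorization layer could be applied as a decorator around the command handlers shown earlier (ICommandAuthorizer and IUserContext are made-up names for this sketch):

public class AuthorizingCommandHandler<TCommand> : ICommandHandler<TCommand> where TCommand : ICommand
{
    private readonly ICommandHandler<TCommand> _inner;
    private readonly ICommandAuthorizer _authorizer;
    private readonly IUserContext _userContext;

    public AuthorizingCommandHandler(ICommandHandler<TCommand> inner,
                                     ICommandAuthorizer authorizer,
                                     IUserContext userContext)
    {
        _inner = inner;
        _authorizer = authorizer;
        _userContext = userContext;
    }

    public void Handle(TCommand command)
    {
        // Every command is authorized in terms of the current user context before it executes.
        if (!_authorizer.IsAuthorized(_userContext, command))
        {
            throw new UnauthorizedAccessException("The current user may not execute this command.");
        }

        _inner.Handle(command);
    }
}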

When exposing APIs, it is quite easy to expose all the Queries and Commands in a straightforward way while keeping all the filtering and authorization logic intact.

Of course, all of this could be achieved by simply turning everything into Requests and writing request handlers, or by doing standard CRUD, but having the separation makes it possible to make all commands work asynchronously or to add some kind of caching layer for the queries later on.