Azure Service Bus Messaging

Microsoft Azure Service Bus supports two distinct messaging patterns: Azure Relay and Service Bus Brokered Messaging. This article is an introduction to Brokered Messaging.
Brokered Messaging decouples communication between applications by introducing an asynchronous messaging system. Sender applications put requests on a queue, and receivers pick them up and process them whenever they are available. This way, if a receiver application is temporarily busy or unavailable, the sender application doesn't fail or get blocked.

Queues

A queue is an entity with a head and a tail. Senders enqueue messages at the tail and receivers pick them up from the head.


Once a message has been processed by a receiver, it is no longer available on the queue for other receivers. When there are multiple receivers, the message load is distributed among them in a load-balanced manner.

Topics and Subscriptions

Topics and Subscriptions can be thought of as a queue with a single entry point but multiple exit points: the entry point is called a Topic and each exit point is called a Subscription. When a sender sends a message to a Topic, Service Bus creates a copy of it for each Subscription. If several receivers are interested in the message, they can each listen on a different Subscription and get their own copy. If they listen on the same Subscription, however, only one of them will receive the message.


Subscriptions can specify filters; this way each subscription exposes only the messages that match its filter.

Service Bus Namespace

A namespace is a way to categorize queues and topics. You cannot create any Service Bus entities until you have created a namespace. A company may want separate namespaces for different projects or teams. Creating a namespace is easy: go to the Azure portal and follow the instructions. Once the namespace is created, it can be referred to using its connection strings.


Creating Service Bus entities (Queues, Topics and Subscriptions) is just as easy as creating a Namespace: follow the instructions in the Azure portal.

You can also manage Service Bus entities programmatically using the NamespaceManager class.
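For example, a queue could be created at run time roughly like this (a sketch using the WindowsAzure.ServiceBus client library; the queue name and the connection string variable are placeholders):

```csharp
using Microsoft.ServiceBus;

var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Create the queue only if it doesn't already exist.
if (!namespaceManager.QueueExists("orders"))
{
    namespaceManager.CreateQueue("orders");
}
```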

Client API

In your project, install the WindowsAzure.ServiceBus NuGet package to get the Service Bus client library. Here are a few code snippets to get the basics done:

Basic Queue operations

// Creating a Queue client
var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

// Creating a message
var message = new BrokeredMessage("Hello");

// Sending a message
client.Send(message);

// Receiving a message
client.OnMessage(message =>
{
    // Processing the message here...
});

Basic Topic operations

// Creating a Topic client
var topicClient = TopicClient.CreateFromConnectionString(connectionString, topicName);

// Sending a message
topicClient.Send(message);

// Create a Subscription client
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, topicName, subscriptionName);

// Receiving a message
subscriptionClient.OnMessage(message =>
{
    // Processing the message here...
});

Messaging Factory

Another way of creating messaging clients (QueueClient, TopicClient or SubscriptionClient) is to use a MessagingFactory. It lets us create clients in a more abstract way, thinking of them simply as senders and receivers. This is more flexible and makes it easier to change the messaging strategy without updating the code. For example, if your application currently uses Queues and you decide to switch to Topics/Subscriptions, all you need to do is update your entity paths, provided you used a messaging factory to create the clients.

var factory = MessagingFactory.CreateFromConnectionString(ConnectionString);

var messageReceiver = factory.CreateMessageReceiver("entityPath");

var messageSender = factory.CreateMessageSender("entityPath");

Message Lifetime

When a message receiver is created it specifies a Receive Mode. There are two receive modes:

  • Receive and Delete: As soon as the receiver picks up the message, the message is deleted from the queue. If the receiver crashes before it can process the message, the message is lost. This mode guarantees at-most-once delivery.
  • PeekLock: This is the default mode and guarantees at-least-once delivery. When a receiver picks up a message in this mode, Service Bus puts a lock on the message and hides it from other receivers, but the message stays on the queue until a client completes processing it. If the client crashes before that, the message becomes visible on the queue again after the lock expires. The default lock duration is 30 seconds and it can be set on the queue or subscription.

After the message receiver is done with the message, it should notify the service bus by calling one of the following methods:

  • Complete(): The message was successfully processed and it should be deleted from the queue.
  • Abandon(): The receiver doesn't know how to process the message and returns it to the queue, making it available to other receivers.
  • Defer(): The receiver doesn't want to process the message just yet. The message is returned to the queue but will not be delivered to any receiver unless it is explicitly requested by its sequence number.
  • DeadLetter(): The receiver has encountered an error processing the message and marks it as a dead-letter. Dead-letters are sent to a dead-letter sub-queue of the same entity.
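A typical PeekLock receive loop that settles the message explicitly might look like the following sketch (the queue name is a placeholder and error handling is simplified):

```csharp
var client = QueueClient.CreateFromConnectionString(
    connectionString, "orders", ReceiveMode.PeekLock);

var message = client.Receive();
if (message != null)
{
    try
    {
        // Process the message here...
        message.Complete();   // done: delete it from the queue
    }
    catch (Exception)
    {
        message.Abandon();    // release the lock so another receiver can retry
    }
}
```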

The following code shows how to read a message from the dead-letter queue:

var factory = MessagingFactory.CreateFromConnectionString(ConnectionString);
var deadletterPath = SubscriptionClient.FormatDeadLetterPath(TopicName, SubscriptionName);
var deadletterClient = factory.CreateMessageReceiver(deadletterPath);
deadletterClient.OnMessage(message =>
{
    // Process dead-letter message here
});

A queue or subscription can be configured to move expired or erroneous messages to a dead-letter queue too.

The OnMessage method automatically marks received messages as completed. If the client is going to take care of this itself, AutoComplete should be disabled first:

messageReceiver.OnMessage(message =>
{
    try
    {
        // if message processed successfully
        message.Complete();
    }
    catch (Exception)
    {
        // If message has an error
        message.DeadLetter();
    }

}, new OnMessageOptions
{
    AutoComplete = false
});

How long a message can live on a queue before it is processed is determined by its time-to-live, after which the message is deleted automatically. Time-to-live can be specified on the queue, topic, subscription or the message itself; a message lives only as long as the minimum of these values.

var message = new BrokeredMessage
{
    TimeToLive = TimeSpan.FromMinutes(10)
};

Duplicate Detection

When a client fails to send a message, it usually retries, and this can lead to duplicate messages in the system. A client also re-sends a message if it doesn't receive an acknowledgement from the queue. Enabling duplicate detection makes Service Bus keep track of the MessageId property of all messages. If a new message with a MessageId identical to an existing one is detected within a time window (the duplicate detection history), it is ignored and dropped.

Duplicate detection can be enabled on a Queue or Topic when creating it, either in the Azure portal or programmatically by setting the RequiresDuplicateDetection property. The duplicate detection history window can also be set on a Queue or Topic; in code, set DuplicateDetectionHistoryTimeWindow on a QueueDescription or TopicDescription.
The duplicate detection window should be kept as small as feasible, because a large window means keeping track of more MessageIds, which impacts throughput.
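Putting this together, enabling duplicate detection programmatically might look like the following sketch (the queue name and window size are arbitrary, and namespaceManager is assumed to have been created as shown earlier):

```csharp
var queueDescription = new QueueDescription("orders")
{
    RequiresDuplicateDetection = true,
    // Messages with a duplicate MessageId are dropped within this window.
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
};

namespaceManager.CreateQueue(queueDescription);
```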

Filters, Rules and Actions

A subscription can be created with a filter so that it delivers only a subset of messages to subscription clients. A CorrelationFilter allows for basic filtering on message properties like MessageId, CorrelationId, etc.

namespaceManager.CreateSubscription(topicPath, subscriptionPath,
    new CorrelationFilter { Label = "red", CorrelationId = "high"});

You can also specify a SqlFilter, which allows SQL-like expressions:

namespaceManager.CreateSubscription(topicPath, subscriptionPath,
    new SqlFilter("color = 'blue' AND quantity = 10"));

When creating a subscription, if a filter is not explicitly specified, the subscription is created with a TrueFilter by default, meaning it accepts all messages.
A subscription can also be created with a Rule. A rule executes an Action on the messages matching its filter.

namespaceManager.CreateSubscription(topicPath, subscriptionPath,
    new RuleDescription
    {
        Name = "RedRule",
        Filter = new SqlFilter("color = 'red'"),
        Action = new SqlRuleAction(
            "SET quantity = quantity / 2;" +
            "REMOVE priority;" +
            "SET sys.CorrelationId = 'low';")
    });

In the above example, the rule runs a SqlRuleAction on any message whose 'color' property is 'red'; as a result of the action, two properties of the message are set and one is removed.

Shared Access Policies

Every messaging entity is associated with a set of access policies that determine what you can do with the entity. When an entity is created, it comes with a default policy named RootManageSharedAccessKey which gives you full access (Manage, Send, Receive) to the entity.
Each policy has a primary key and a secondary key, both of which can be used to connect to the entity. If a key is compromised, you should regenerate it and update all clients to use the new key. The reason there are two keys is that you can switch clients to the other key before regenerating one, avoiding downtime in your system.
It is not recommended to use the RootManageSharedAccessKey policy everywhere in your system. You should create custom policies with reasonable access levels (Send, Listen or Manage) and use those in your clients.

Auto-forwarding

You can use auto-forwarding to scale out an individual topic. Service Bus limits the number of subscriptions on a given topic to 2,000. You can accommodate additional subscriptions by creating second-level topics. Note that even if you are not bound by the Service Bus limitation on the number of subscriptions, adding a second level of topics can improve the overall throughput of your topic.

Auto-forwarding scenario

You can enable auto-forwarding by setting the QueueDescription.ForwardTo or SubscriptionDescription.ForwardTo properties on the source queue/subscription:

var srcSubscription = new SubscriptionDescription(srcTopic, srcSubscriptionName);
srcSubscription.ForwardTo = destTopic;
namespaceManager.CreateSubscription(srcSubscription);

The destination entity must exist at the time the source entity is created.


References:
Microsoft Azure Service Bus messaging documentation
Microsoft Azure Service Bus brokered messaging course on Pluralsight
Southworks blog by Jorge Rowies
CloudCasts by Alan Smith
GitHub samples

 


Repository Pattern

Definition

Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.

Objectives

Use the Repository pattern to achieve one or more of the following objectives:

  • You want to maximize the amount of code that can be tested with automation and to isolate the data layer to support unit testing.
  • You access the data source from many locations and want to apply centrally managed, consistent access rules and logic.
  • You want to implement and centralize a caching strategy for the data source.
  • You want to improve the code’s maintainability and readability by separating business logic from data or service access logic.
  • You want to use business entities that are strongly typed so that you can identify problems at compile time instead of at run time.
  • You want to associate a behavior with the related data. For example, you want to calculate fields or enforce complex relationships or business rules between the data elements within an entity.
  • You want to apply a domain model to simplify complex business logic.

 

Best Practices

  1. Do not use IQueryable<T> as the return type of query methods: it creates tight coupling and gives you no benefit over using the OR/M directly. Use more general types like IEnumerable<T>.
  2. Create repositories for domain aggregates only, not for every single entity. If you are not using Aggregates in a Bounded Context, the repository pattern may be less useful.
  3. Creating base types for repositories is not recommended. If one is needed, make sure it does not break the Interface Segregation Principle.
  4. Generic repositories are not recommended either; make sure you have a good reason if you want to create one.
  5. Repositories must return domain entities which are usable in the domain services without having to be converted, so any data contracts must be converted to domain entities before the repository returns them.
  6. Apart from CRUD operations, repositories can also have identity generator, count methods or finder methods.
  7. A repository should only have the minimum required operations. It doesn’t even have to include all the four of CRUD operations. It can be as simple as a repository with only one operation.
  8. It is OK to have bulk methods for adding or removing multiple items to or from the repository.
  9. If you find that you must create many finder methods supporting use case optimal queries on multiple Repositories, it’s probably a code smell. First of all, this situation could be an indication that you’ve misjudged Aggregate boundaries and overlooked the opportunity to design one or more Aggregates of different types. However, if you encounter this situation and your analysis indicates that your Aggregate boundaries are well designed, this could point to the need to consider using CQRS.
  10. Do not design a Repository to provide access to parts that the Aggregate Root would not otherwise allow access to by way of navigation.
  11. Place the repository interface definition in the same Module (assembly) as the Aggregate type that it stores. The implementation class goes in a separate Module.
  12. Generally speaking, there is a one-to-one relationship between an Aggregate type and a Repository.
  13. If the aggregate includes large members, for example, big lists of items, you can consider lazy loading.
  14. The only responsibility of a repository is data persistence, so no business (domain) logic should be added to a repository; the operations are CRUD only. This also applies to the names of the methods.
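As a sketch of these guidelines, a repository for a hypothetical Order aggregate might expose only what its domain actually needs (all names here are illustrative):

```csharp
// Defined in the same assembly (Module) as the Order aggregate.
public interface IOrderRepository
{
    Order Get(OrderId id);        // finder method returning a domain entity
    void Add(Order order);
    void Remove(Order order);
    OrderId NextIdentity();       // identity generator
}
```

Note that it returns domain types rather than IQueryable<T>, is scoped to a single aggregate, and contains no business logic.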

Dynamic Deserialization

Binary deserialization fails with a SerializationException saying ‘Unable to find assembly’ if you try to deserialize an object whose type is not referenced in the current application domain.

The easiest way to resolve this is to reference the assembly containing the required type. This way the BinaryFormatter will return an object of the same type and all you need to do is a simple cast:

var formatter = new BinaryFormatter();
var obj = (SourceType)formatter.Deserialize(stream);

However, this approach may not be possible for many reasons: You may not have access to the source assembly, the assembly may be targeting a different platform, etc. In such cases the solution is to create proxy types which have similar structure to the source type. Your BinaryFormatter then needs to be configured to deserialize the stream to a proxy type rather than the original one. This is done by creating a custom SerializationBinder:

sealed class MySerializationBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        return Type.GetType(String.Format("{0}, {1}", typeName, PROXY_ASSEMBLY_NAME));
    }
}

You then need to assign it to the formatter:

var formatter = new BinaryFormatter
{
    Binder = new MySerializationBinder ()
};

But there are still cases where you cannot even create proxy assemblies because the structure of the original types may be completely unknown to you. In such cases you should create a Descriptor Proxy which can represent a deserialized object in a dynamic way:

[Serializable]
class ObjectDescriptor : ISerializable
{
    private readonly SerializationInfoEnumerator enumerator;

    [SecurityPermission(SecurityAction.Demand, SerializationFormatter = true)]
    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
    }

    [SecurityPermission(SecurityAction.Demand, SerializationFormatter = true)]
    private ObjectDescriptor(SerializationInfo info, StreamingContext context)
    {
        this.enumerator = info.GetEnumerator();
    }

    public IEnumerable<SerializationEntry> GetMembers()
    {
        this.enumerator.Reset();
        while (this.enumerator.MoveNext())
        {
            yield return this.enumerator.Current;
        }
    }
}

The descriptor object can then be used to create a dynamic object:

public dynamic CreateDynamicObject(ObjectDescriptor descriptor)
{
    var nameRegex = new Regex("<(.+)>");

    IDictionary<string, object> expando = new ExpandoObject();

    foreach (SerializationEntry member in descriptor.GetMembers())
    {
        string memberName = nameRegex.IsMatch(member.Name) ? nameRegex.Match(member.Name).Groups[1].Value : member.Name;
        expando[memberName] = member.Value is ObjectDescriptor complexMember ? CreateDynamicObject(complexMember) : member.Value;
    }

    return (ExpandoObject)expando;
}

Find a complete sample project implementing the above solution at this GitHub repository.

Change The Author of Git Commits

  1. Identify one revision before the commit you need to update. In the following example, the last two commits are going to be updated.
  2. Run git rebase command with the identified commit number:
    git rebase -i 2778751
  3. In the editor, for each commit you need to update, change the word pick to edit. (Make sure you know how to use the Vim editor)
  4. Run the following command to set the author to the current configured git user:
    git commit --amend --reset-author --no-edit
    Or use the following to set it to a custom author:
    git commit --amend --no-edit --author="Author Name <email@address.com>"
  5. Run git rebase --continue
  6. Repeat steps 4 and 5 until all of the commits are updated.

Using Ninject in ASP.NET MVC applications

The following article is part of chapter 4 of my book, Mastering Ninject for Dependency Injection, Packt Publishing, 2013.


In this article, we will implement the Northwind customers scenario using Ninject in an ASP.NET MVC 3 application. Ninject MVC extension also supports other versions of the MVC framework. (Download the source code)

Add a new ASP.NET MVC 3 web application to the Northwind solution with references to the Northwind.Core project and the Ninject Conventions extension.
We should also reference the Ninject MVC extension. The easiest way is to install the Ninject.Mvc3 package using NuGet. Keep in mind that although the name of the package is Ninject.Mvc3, it is forward compatible with newer versions of the MVC framework. Alternatively, we could download the Ninject.Web.Mvc binary from GitHub and reference it from our project; in that case, we also need to reference the Ninject and Ninject.Web.Common libraries. These libraries are referenced automatically if we install the Ninject.Mvc3 package using NuGet.

NuGet also adds the App_Start folder, containing a file named NinjectWebCommon.cs, to the project. The NinjectWebCommon class contains a Start() method that serves as the starting point of the application; it sets up everything needed to hook Ninject into the MVC framework. So, once we have installed the Ninject MVC extension using NuGet, we do not need to do anything else to set up our application. This is possible because NuGet allows packages to add files to a project as part of their installation process. The NinjectWebCommon class also has a method named CreateKernel which can be used as the Composition Root to register our services. We will talk more about this class in the next section.
If we reference Ninject libraries manually using separately downloaded binary files, we should make the following changes to the MvcApplication class located under the Global.asax file:

  1. The MvcApplication class should derive from the NinjectHttpApplication class, rather than System.Web.HttpApplication.
  2. Instead of having the Application_Start method as the starting point, we should override the OnApplicationStarted method, and anything within Application_Start should go to OnApplicationStarted.
  3. We should override CreateKernel and use it for service registration.

The following code shows the CreateKernel method:

protected override IKernel CreateKernel()
{
    var kernel = new StandardKernel();
    kernel.Load(new ServiceRegistration());
    return kernel;
}
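The ServiceRegistration module loaded by CreateKernel is a regular Ninject module; a minimal sketch (the specific bindings are assumptions based on the Northwind example) might look like this:

```csharp
using Ninject.Modules;

public class ServiceRegistration : NinjectModule
{
    public override void Load()
    {
        // Map domain abstractions to their concrete implementations.
        Bind<ICustomerRepository>().To<CustomerRepository>();
        Bind<ICustomerValidator>().To<CustomerValidator>();
    }
}
```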

Even if we use NuGet to set up our project, we can delete the App_Start folder and use Global.asax as described previously. In this case, we can remove the references to the WebActivator and Microsoft.Web.Infrastructure libraries that NuGet created. It is up to you which approach to use, as they do exactly the same thing in two different ways. The first is easier and needs no extra effort to set up the application, while the second uses the existing Global.asax file as the application startup point and doesn't require additional files or libraries to be referenced. In this example, we use the Global.asax file as the starting point. In the next section, we will use the App_Start folder and the NinjectWebCommon class which NuGet creates.
Let’s start implementing the presentation of customers list in our application. Open the HomeController class and add a constructor which introduces our ICustomerRepository interface as its parameter:

private readonly ICustomerRepository repository;

public HomeController(ICustomerRepository repository)
{
    this.repository = repository;
}

The next step is to modify the Index action method as follows:

public ActionResult Index()
{
    var customers = repository.GetAll();
    return View(customers);
}

It uses the ICustomerRepository interface to populate the customers. Note that we don't need to create a new Model for the customer; the one we have already created in our domain layer is used here. Then delete the existing Index.cshtml View and add a new one with the List scaffold template and our Customer domain model as the Model class.
Now add the Create action methods as follows:

public ActionResult Create()
{
    return View();
}

[HttpPost]
public ActionResult Create(Customer customer)
{
    if (ModelState.IsValid)
    {
        repository.Add(customer);
        return RedirectToAction("Index");
    }
    return View();
}

The first one is called when the Create New hyperlink is clicked (HTTP GET) and the second one when the Create View is submitted (HTTP POST). The created customer Model is passed to the Create method and can be added to the repository. Checking the ModelState.IsValid property performs server-side validation. We can now add a Create View for this action with Core.Customer as the Model class and the Create scaffold template.

Validator injection

Now we need to add some validation rules to our Customer Model class. MVC framework supports different kinds of validation including annotation-based validation in which we use validation attributes on the properties of the Model to define the validation rules:

public class Customer
{
    [Required]
    public string ID { get; set; }
    [Required]
    public string CompanyName { get; set; }
    public string City { get; set; }
    public string PostalCode { get; set; }
    [StringLength(10)]
    public string Phone { get; set; }
}

The validation attributes are not part of the MVC library, which makes it possible to apply them to our Customer Model within our domain layer. This way, we can share these validation rules with other UI frameworks as well; we just need to reference the System.ComponentModel.DataAnnotations library in our domain project. The MVC framework automatically validates the Model based on the provided attributes, but these attributes are limited to basic validation rules. What if we need to check that the provided ID for our customer is not duplicated? In such scenarios we need to create our own custom validation attributes:

public class UniqueCustomerIdAttribute : ValidationAttribute
{
    [Inject]
    public ICustomerValidator Validator { get; set; }

    public override bool IsValid(object value)
    {
        if (Validator == null)
        {
            throw new Exception("Validator is not specified.");
        }
        if (string.IsNullOrEmpty(value as string))
        {
            return false;
        }
        return Validator.ValidateUniqueness(value as string);
    }
}

By deriving from ValidationAttribute and overriding its IsValid method, we can define a custom validation attribute. This validator uses an object of type ICustomerValidator to check the given value (a customer ID) against the repository and determine whether it is unique or duplicated. The following is the implementation of ICustomerValidator:

public class CustomerValidator : ICustomerValidator
{
    private readonly ICustomerRepository repository;

    public CustomerValidator(ICustomerRepository repository)
    {
        this.repository = repository;
    }

    public bool ValidateUniqueness(string customerID)
    {
        return repository.Get(customerID) == null;
    }
}

Validation is successful, provided the repository cannot find any existing customer with the given customer ID.
You may have noticed that in the UniqueCustomerIdAttribute class we didn't introduce the ICustomerValidator interface as a dependency in the constructor. The reason is that it is not possible to apply an attribute which requires constructor parameters without providing its arguments. That's why we used the Property Injection pattern rather than Constructor Injection. Although this attribute is instantiated by the MVC framework, Ninject can inject the dependency before the IsValid method is called.

Now you may be wondering whether applying the [Inject] attribute in our domain layer makes it dependent on Ninject. It doesn't, because we didn't use the Ninject version of the [Inject] attribute. Instead, we created another InjectAttribute class in our Core library. We discussed how to set up Ninject to use a custom attribute instead of its internal [Inject] attribute in Chapter 3, Meeting Real-world Requirements. This way we don't have to reference the Ninject library, and can even replace Ninject with another DI container without touching the domain layer.
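Pointing Ninject at the Core library's own attribute is a small settings change; the following sketch assumes Ninject 3's NinjectSettings.InjectAttribute property and a hypothetical Northwind.Core.InjectAttribute type:

```csharp
var settings = new NinjectSettings
{
    // Use our own attribute instead of Ninject's built-in [Inject].
    InjectAttribute = typeof(Northwind.Core.InjectAttribute)
};
var kernel = new StandardKernel(settings, new ServiceRegistration());
```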
We can now add the UniqueCustomerIdAttribute attribute to the validation rules of our Customer Model:

[Required, UniqueCustomerId]
public string ID { get; set; }

Filter injection

Filters are implementations of the IActionFilter, IResultFilter, IExceptionFilter, or IAuthorizationFilter interfaces that make it possible to perform special operations while invoking an action method. ASP.NET MVC allows us to apply filters in two ways, both of which are supported by Ninject:

  • Applying a filter attribute to the Controller or an Action method. This approach has been supported by MVC framework since its first version and doesn’t fully support DI.
  • Applying filters without attributes using filter providers, which was introduced in MVC 3 and supports all DI patterns.

In the first method, the filter class derives from ActionFilterAttribute and the resulting filter attribute is applied to a Controller or one of its action methods. Like other attributes, a filter attribute cannot be applied if it does not have a default constructor, so we cannot use Constructor Injection in filter attributes. However, if we use Property Injection via the [Inject] attribute, the dependencies are injected by Ninject without any special configuration. The following example shows how to define an action filter attribute which passes action information to a performance monitoring service:

public class PerformanceFilterAttribute : ActionFilterAttribute
{
    [Inject]
    public IPerformanceMonitoringService PerformanceMonitor
    { get; set; }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        PerformanceMonitor.BeginMonitor(filterContext);
    }

    public override void OnActionExecuted(ActionExecutedContext filterContext)
    {
        PerformanceMonitor.EndMonitor(filterContext);
    }
}

The implementation of IPerformanceMonitoringService will be injected by Ninject into the property PerformanceMonitor.
MVC 3 and later versions, however, support a new way of applying filters which is DI compatible and allows all DI patterns, including Constructor Injection. The previous approach is therefore not recommended in MVC 3+.
The following example demonstrates how to define and apply a LogFilter to our actions, which logs tracking information about the action methods being invoked. The filter uses the ILog interface of the log4net library as a dependency:

public class LogFilter : IActionFilter
{
    private readonly ILog log;
    private readonly Level logLevel;

    public LogFilter(ILog log, string logLevel)
    {
        this.log = log;
        this.logLevel = log.Logger.Repository.LevelMap[logLevel];
    }

    public void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var message = string.Format(
            CultureInfo.InvariantCulture,
            "Executing action {0}.{1}",
            filterContext.ActionDescriptor.ControllerDescriptor.ControllerName,
            filterContext.ActionDescriptor.ActionName);
        this.log.Logger.Log(typeof(LogFilter), this.logLevel, message, null);
    }

    public void OnActionExecuted(ActionExecutedContext filterContext)
    {
        var message = string.Format(
            CultureInfo.InvariantCulture,
            "Executed action {0}.{1}",
            filterContext.ActionDescriptor.ControllerDescriptor.ControllerName,
            filterContext.ActionDescriptor.ActionName);
        this.log.Logger.Log(typeof(LogFilter), this.logLevel, message, null);
    }
}

The LogFilter class uses the provided filterContext argument to determine the name of the Action method and its enclosing Controller. It then uses the injected implementation of ILog to log the tracking information. This class introduces two dependencies: the ILog interface and the log level under which the messages should be logged.
In order to tell MVC to use Ninject to resolve a filter, we need to register the filter using the BindFilter method of Kernel:

Kernel.BindFilter<LogFilter>(FilterScope.Action, 0)
      .WithConstructorArgument("logLevel", "Info");

The first parameter defines the filter scope, whose type is System.Web.Mvc.FilterScope, and the second is a number defining the order of the filter. This information is required by MVC to instantiate and apply filters. Ninject collects this information and asks MVC on our behalf to create an instance of the given filter type and apply it in the given scope. In the previous example, LogFilter will be resolved by Ninject with "Info" as the argument for the logLevel parameter and will be applied to all Action methods.
The ILog log parameter will be resolved based on how we register ILog. If you have used Log4Net before, you will remember that it can associate each logger to the type of class for which the logger is used:

public class MyClass
{
    private static readonly ILog log =
                       LogManager.GetLogger(typeof(MyClass));
}

This way, the logs can later be filtered based on their associated types.
In order to provide the required type for our logger, we bind it to a method rather than a concrete service. This way we can use the context object to determine the type of object requiring the log:

Bind<ILog>().ToMethod(GetLogger);

The following is the code for the GetLogger method:

private static ILog GetLogger(IContext ctx)
{
    var filterContext = ctx.Request.ParentRequest.Parameters
                    .OfType<FilterContextParameter>().SingleOrDefault();
    return LogManager.GetLogger(filterContext == null ?
        ctx.Request.Target.Member.DeclaringType :
        filterContext.ActionDescriptor.ControllerDescriptor.ControllerType);
}

In the previous code, ctx.Request is the request for resolving ILog, and ParentRequest is the request for resolving LogFilter. When a filter class is registered using BindFilter, Ninject provides the request with a parameter of type FilterContextParameter, which contains information about the context of the object to which the filter is being applied; from it we can obtain the type of the Controller class. Otherwise, this parameter is not provided, which means the logger was not requested by a filter class, in which case we just use the type of the class requiring the logger.

Conditional filtering (When)

Now what if we are not going to apply the filter to all of the Controllers or action methods? Ninject provides three groups of WhenXXX methods to determine under which conditions to apply the filter:

  • WhenControllerType: This method applies the filter to the specified Controller types only.
  • WhenControllerHas: This method applies the filter to those Controllers that have the specified attribute type.
  • WhenActionMethodHas: This method applies the filter to those Action methods that have the specified attribute type.

Apart from the mentioned three groups, Ninject offers a generic When method which can be used to define any custom conditions which cannot be applied using the previous methods.
The following code shows how to apply LogFilter to those action methods which have LogAttribute, given that the LogAttribute class is a simple class deriving from the Attribute class:

Kernel.BindFilter<LogFilter>(FilterScope.Action, 0)
      .WhenActionMethodHas<LogAttribute>()
      .WithConstructorArgument("logLevel", "Info");
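With this binding in place, simply decorating an action method with the attribute is enough for LogFilter to apply. The controller and action below are hypothetical examples, not from a real project:

```csharp
public class OrdersController : Controller   // hypothetical controller
{
    [Log]   // LogFilter wraps this action because of WhenActionMethodHas<LogAttribute>()
    public ActionResult Index()
    {
        ... // action implementation
    }
}
```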

This is another example that shows how to apply this filter to all of the actions of the HomeController class:

Kernel.BindFilter<LogFilter>(FilterScope.Controller, 0)
      .WhenControllerType<HomeController>()
      .WithConstructorArgument("logLevel", "Info");

Contextual arguments (With)

In the previous examples, we have always used a constant “Info” argument to be passed to the logLevel parameter. Apart from the standard WithXXX helpers, which can be used on normal bindings, Ninject provides the following WithXXX methods especially for filter binding:

  • WithConstructorArgumentFromActionAttribute: Gets the constructor argument from the attribute applied to the action method.
  • WithConstructorArgumentFromControllerAttribute: Gets the constructor argument from the attribute applied to the Controller class.
  • WithPropertyValueFromActionAttribute: In case of Property Injection, sets the property using a value from the attribute applied to the action method.
  • WithPropertyValueFromControllerAttribute: In case of Property Injection, sets the property using a value from the attribute applied to the Controller class.

In the following code, we get the log level from the LogAttribute class rather than always passing a constant string to the logLevel parameter:

Kernel.BindFilter<LogFilter>(FilterScope.Action, 0)
    .WhenActionMethodHas<LogAttribute>()
    .WithConstructorArgumentFromActionAttribute<LogAttribute>(
        "logLevel",
        attribute => attribute.LogLevel);

The previous code requires the LogAttribute class to have the LogLevel property:

public class LogAttribute : Attribute
{
    public string LogLevel { get; set; }
}
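With this property in place, the level can vary per action. The controller below is a hypothetical example:

```csharp
public class OrdersController : Controller   // hypothetical controller
{
    [Log(LogLevel = "Warn")]   // LogFilter logs this action at the Warn level
    public ActionResult Delete(int id)
    {
        ... // action implementation
    }
}
```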

Custom awaitable and awaiter types in C# 5.0 asynchronous programming

In C# 5.0 you can use the await keyword to await an awaitable object. But what is the minimum requirement for an object to be considered awaitable by C#? Interestingly there are no interfaces to be implemented in order to be considered awaitable!
Having a parameterless method named GetAwaiter() that returns an awaiter object is the only convention required for a type to be recognized as awaitable by C#:

// This class is awaitable because it has the GetAwaiter method,
// provided MyAwaiter is an awaiter.
public class MyWaitable
{
    public MyAwaiter GetAwaiter()
    {
        return new MyAwaiter();
    }
}

Now the question is how to make a type an awaiter. The minimum requirements for a type to be an awaiter are implementing the INotifyCompletion interface (which declares the OnCompleted method) and having an IsCompleted property and a GetResult method with the following signatures:

public class MyAwaiter : INotifyCompletion
{
    // Called by the compiler-generated code to schedule the rest of the
    // async method when IsCompleted returns false.
    public void OnCompleted(Action continuation)
    {
    }

    // Checked first by the generated code; if true, the await
    // completes synchronously.
    public bool IsCompleted
    {
        get { return false; }
    }

    // Called at the end of the await to obtain the result
    // (or rethrow a captured exception).
    public void GetResult()
    {
    }
}
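To see the pattern end to end, the following is a minimal sketch (the type names and the constant result are my own, not from the article) of an awaitable that is already complete, so the await finishes synchronously:

```csharp
using System;
using System.Runtime.CompilerServices;
using System.Threading.Tasks;

// An awaiter that is always complete, so await never suspends.
public class CompletedAwaiter : INotifyCompletion
{
    public bool IsCompleted { get { return true; } }

    public void OnCompleted(Action continuation)
    {
        // Not reached in this example, because IsCompleted is already true.
        continuation();
    }

    // The return type of GetResult becomes the type of the await expression.
    public int GetResult() { return 42; }
}

public class CompletedAwaitable
{
    public CompletedAwaiter GetAwaiter() { return new CompletedAwaiter(); }
}

public static class Program
{
    private static async Task<int> AwaitIt()
    {
        // The compiler calls GetAwaiter(), checks IsCompleted,
        // then calls GetResult() to obtain the value.
        return await new CompletedAwaitable();
    }

    public static void Main()
    {
        Console.WriteLine(AwaitIt().Result);   // prints 42
    }
}
```

Because IsCompleted returns true, the compiler-generated code skips OnCompleted entirely and goes straight to GetResult.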

Logging Best Practices

In this article I will cover logging best practices in terms of the following concerns, regardless of which logging framework we use:
1. What to log
2. How to choose logging levels
3. Where to log

What to log

First of all, logging is not a one-off, forward-only process. You don’t have to log everything you need up front and then never touch it again. It can be an ongoing process, and you can refactor your logging code as your needs change. Although we don’t have to log every possible event of our application up front just in case we might need it in the future, there are some minimum events that we should log in the first place.
All errors and critical conditions should be logged. This doesn’t mean we should log an error in every catch clause, because whether or not the caught exception is an error worth logging depends on what that exception is and how you are going to handle it. For example, your catch clause might have caught a predicted “Access Denied” exception and made the appropriate decision. This is not considered an error, but a lower log level like Debug never hurts. Any unusual circumstance should be logged, even if it doesn’t affect the application; note that the last example is not considered an unusual circumstance either.
It is also a good idea to log the starting point and end point of every process. Ideally, if we have singly responsible methods, that means logging at the beginning and end of each method. We don’t have to log every single parameter of each method, but we can log some critical information such as endpoint URLs, connection strings, or IDs, and we can add more debugging logs later if we need to. Never worry about logging too much, because almost all logging frameworks allow you to disable your logs partially: you can filter them based on the type for which you have logged or the level of your logs. So it is important to name your loggers after their surrounding types, or at least give them a meaningful name, as the logger names will later be used for grouping and filtering the logs. It is also crucial to choose appropriate logging levels.

How to choose logging levels

I never use the Info logging level for debugging purposes.
Although logging frameworks allow us to have as many custom logging levels as we need, there are usually several default logging levels that are mostly used:
Fatal: Usually the highest level, used when a problem stops the application’s functionality. These errors usually happen because of unexpected circumstances for which no alternative solution is provided, for example, no disk space or a network disconnection.
Error: This level is used when an error occurs but is handled and doesn’t interrupt the application. It may cause some minor problems or lead to some consequences in the future. For example, a connection to the master database is refused and we redirect the request to the slave to handle the issue.
Warn: For logging unusual circumstances that need consideration or may lead to future problems. Low disk space is an example that may need a warning-level log.
Info: We use this logging level for monitoring the application’s progress. The logs should be such that when someone reads them, they can tell what is going on in the application. They shouldn’t contain debugging data or repeated information, so it’s not a good idea to use the Info level inside long loops.
Debug: This is the lowest logging level, and it usually contains so much information that we often filter this level out and use it only while debugging the application. It can contain any details that we normally don’t need to see in the logs.
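As an illustration of filtering by type and by level, here is a sketch of a log4net configuration (log4net is the framework used earlier in this article; the logger name is a hypothetical example):

```xml
<log4net>
  <root>
    <!-- Debug messages are filtered out everywhere by default -->
    <level value="INFO" />
    <appender-ref ref="ConsoleAppender" />
  </root>
  <!-- ...but kept for the one component we are currently debugging -->
  <logger name="MyApp.DataAccess">
    <level value="DEBUG" />
  </logger>
</log4net>
```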

Info or Debug

It is sometimes confusing whether to use Info or Debug. A common mistake is that some developers use the Debug level for logging dependencies. For example, when a business object uses a data access object which needs to log some database operations, it may look like the data access object should use the Debug level, as it serves a lower-level functionality. However, when we choose the logging level for a component, we should consider that component in isolation. Any class can be a dependency of a higher-level type and at the same time depend on other lower-level components. When we read the logs, we can filter only those components that we need to monitor or debug, so the lower-level types should also have their own Info and Debug levels. This is easier to handle when the application’s concerns are separated into individual classes. Otherwise you will have to consider a lower priority for some of the multiple concerns being addressed in a single class and choose the Debug level for them, to distinguish them from the main concerns of the same class.

Where to log

The easiest way to log is to log inline in your code. Ideally, however, each object should have its own concern, and logging is yet another concern, so we should separate it and put it somewhere else. Since logging is not limited to a specific layer of the application, it is a cross-cutting concern, and two of the best practices for addressing such concerns are the Decorator design pattern and Aspect Oriented Programming (AOP). Both approaches allow you to put your logging logic in separate classes and attach it to the type which is going to be logged. They allow you to log prior to a method, after it, in case of its success, or in case of its failure. One may ask what happens if they need to log in the middle of a method. The answer is that if each method deals with a single responsibility, you won’t need to. The following code needs to be refactored so that we won’t need to log in the middle of the Transfer method.

void Transfer(string source, string dest, Money amount)
{
    log.Info("Transferring...");
    log.InfoFormat("Debiting {0}...", source);
    ... // Debiting implementation
    log.InfoFormat("{0} debited.", source);
    log.InfoFormat("Crediting {0}...", dest);
    ... // Crediting implementation
    log.InfoFormat("{0} credited.", dest);
    log.Info("Transfer finished.");
}

The following shows how to refactor the above code:

void Transfer(Account source, Account dest, Money amount)
{
    log.Info("Transferring...");
    source.Debit(amount);
    dest.Credit(amount);
    log.Info("Transfer finished.");
}

The Debit and Credit implementations should be moved to the Account object.

public class Account
{
    private static readonly ILog log =
                       LogManager.GetLogger(typeof(Account));

    public string Number { get; set; }

    public void Debit(Money amount)
    {
        log.InfoFormat("Debiting {0}...", this.Number);
        ... // Debiting implementation
        log.InfoFormat("{0} debited.", this.Number);
    }

    public void Credit(Money amount)
    {
        log.InfoFormat("Crediting {0}...", this.Number);
        ... // Crediting implementation
        log.InfoFormat("{0} credited.", this.Number);
    }
}

Another moral of the above example is that the calling code should not take care of logging for the called method; each method should log its own responsibilities only.
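To go one step further and remove logging from the business object entirely, here is a minimal sketch of the Decorator approach mentioned above (the interface and type names are my own, not from the article, and Console.WriteLine stands in for a real logger):

```csharp
using System;

public interface IAccount
{
    string Number { get; }
    void Debit(decimal amount);
}

public class Account : IAccount
{
    public string Number { get; private set; }
    public decimal Balance { get; private set; }

    public Account(string number, decimal balance)
    {
        Number = number;
        Balance = balance;
    }

    // Pure business logic: no logging here at all.
    public void Debit(decimal amount)
    {
        Balance -= amount;
    }
}

// The decorator adds the logging concern around the real object.
public class LoggingAccount : IAccount
{
    private readonly IAccount inner;

    public LoggingAccount(IAccount inner) { this.inner = inner; }

    public string Number { get { return inner.Number; } }

    public void Debit(decimal amount)
    {
        Console.WriteLine("Debiting {0}...", inner.Number);   // stand-in for log.InfoFormat
        inner.Debit(amount);
        Console.WriteLine("{0} debited.", inner.Number);
    }
}
```

Callers work against IAccount, so they cannot tell whether logging is attached; an AOP framework generates essentially the same interception for you instead of requiring a hand-written decorator.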