Chrome vs Firefox vs Edge vs Opera vs Brave vs Safari on Mac

I just did a quick comparison of these browsers for my personal use. The only things I considered were:

  1. Performance: How smoothly the window resizes and the browser updates the layout.
  2. Memory consumption.
  3. CPU usage.

Test conditions:

  • The latest version of all 6 browsers was used.
  • Each of them had 3 tabs open on the same websites (App Store, Microsoft, YouTube).
  • The comparison was done on a MacBook Pro (15-inch, 2017).
  • It’s not a thorough test, just enough for me to see which one suits me.

Here are the results.

Performance:

Safari, Opera, and Firefox are the best. The other three are laggy when resizing.

Memory:

It’s hard to tell immediately which one is more efficient, as they all have side processes running, but it seems Opera consumes the least memory and Firefox uses the most.

CPU:

Tracking CPU usage is even harder than memory, but Firefox is consistently on top, consuming the most CPU time, while Safari is the most efficient one. Edge, Chrome, and Brave are pretty much the same.

Conclusion

The winner on Mac is Safari, but it’s not a cross-platform browser. The next best in terms of performance is Opera, then Firefox. Chrome, Brave, and Edge failed the first test, so they are not even considered as options.

Update 1:

I used Opera for a week and found these cons:

  • It has issues with location services; Google Maps couldn’t show my location correctly.
  • It doesn’t have good support and isn’t listed as a supported browser for some websites.
  • Due to its limited support for import and export, it’s hard to switch to Opera from other browsers or to switch back.
  • It doesn’t allow sending feedback.
  • The settings page is not well organized and it’s not easy to find the settings you need.

So switching to Firefox now.

Update 2:

Firefox has some rendering issues with its Android app, so I have to trade off between Chrome’s performance issue on Mac while resizing and Firefox’s rendering issue on the phone.
The other issue with Firefox is that it sometimes doesn’t update saved passwords (e.g. on Okta), while Chrome does.

So switching back to Chrome.


Java Runnable, Callable, Supplier, etc.

Java doesn’t have the concept of a delegate; instead, if you need a pointer to a function, you can create inline anonymous classes (or lambda expressions as of Java 8) which are implementations of certain interfaces designed for this purpose (a.k.a. functional interfaces as of Java 8). However, as Java evolves, more of these interfaces are being added. While they may seem very similar and confusing, each of them has a unique characteristic that sets it apart from the others. You can map many of them to identical types in .NET. The following table lists some of the best-known interfaces, but there are many more. For example, to support functions with two arguments, Java has another interface called BiFunction, and if you need more arguments, you have to create your own interfaces. Remember that Java (up to version 10) doesn’t support identical class names if the only difference is the number of type arguments. (In .NET there are various Func and Action types with up to 16 type arguments.)

Java            Supports Exceptions   Takes Argument   Returns Value   .NET Equivalent
Callable<V>     Yes                   No               Yes             Func<TResult>
Supplier<T>     No                    No               Yes             Func<TResult>
Function<T,R>   No                    Yes              Yes             Func<T,R>
Consumer<T>     No                    Yes              No              Action<T>
Runnable        No                    No               No              Action
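
To make the mapping concrete, here is a small C# sketch of the .NET counterparts listed in the table (the lambdas are arbitrary examples):

using System;

class DelegateExamples
{
    static void Main()
    {
        // Func<TResult> ~ Callable/Supplier: no argument, returns a value.
        Func<string> supplier = () => "Hello";

        // Func<T,TResult> ~ Function<T,R>: takes an argument, returns a value.
        Func<int, int> square = x => x * x;

        // Action<T> ~ Consumer<T>: takes an argument, returns nothing.
        Action<string> consumer = s => Console.WriteLine(s);

        // Action ~ Runnable: no argument, no return value.
        Action runnable = () => Console.WriteLine("Running");

        consumer(supplier());         // prints "Hello"
        Console.WriteLine(square(4)); // prints 16
        runnable();                   // prints "Running"
    }
}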

API Security Checklist

  • Use HTTPS to protect sensitive data (authentication credentials, API Keys, etc.) in transit.
  • Authentication/Authorisation: Make sure the endpoints are protected with proper access levels.
  • GET methods are an easy target for attackers, so never perform an operation that changes the state of your application in a GET method.
  • Protect your API against CSRF (Cross-Site Request Forgery) attacks.
  • Make sure your API is not vulnerable to XSS (Cross-Site Scripting) attacks.
  • Sign JWTs (JSON Web Tokens) securely, preferably using strong secrets.
  • Use API Keys for every request.
  • Treat Management Endpoints differently than normal ones, by enforcing stronger security policies (e.g. multi-factor authentication).
  • Handle exceptions decently so that technical error details are not exposed to clients.
  • Use SOP (Same-Origin Policy) and disable CORS if it’s not needed. When enabling CORS, be as specific as possible.
  • Do not put any sensitive information in the URL params as they can be logged by servers. Put them in the request header or body.
  • When setting cookies, use Secure and HttpOnly. Also restrict the scope of cookies.
  • Any input or data being imported may eventually end up in users’ browsers as part of an HTML page, and you don’t want to send a malicious script to them. Validating input and imported data is one of the ways to prevent clickjacking, XSS, or stored CSRF flaws.
  • Any input or data being imported may also end up being inserted into your database, so make sure your application is protected against SQL Injection attacks.
  • Set the response Content-Type header properly to match the response MIME type and disable MIME type sniffing (nosniff).
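
To illustrate a couple of these items, here is a minimal sketch, assuming an ASP.NET Core API (the cookie name and paths are examples): it sets the nosniff header on every response and uses Secure/HttpOnly cookies with a restricted scope.

using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;

public class Startup
{
    public void Configure(IApplicationBuilder app)
    {
        app.Use(async (context, next) =>
        {
            // Disable MIME type sniffing on every response.
            context.Response.Headers["X-Content-Type-Options"] = "nosniff";
            await next();
        });

        app.Run(async context =>
        {
            // Secure + HttpOnly cookies with a restricted scope.
            context.Response.Cookies.Append("session-id", "opaque-value",
                new CookieOptions
                {
                    Secure = true,
                    HttpOnly = true,
                    SameSite = SameSiteMode.Strict,
                    Path = "/api"
                });
            await context.Response.WriteAsync("OK");
        });
    }
}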

Azure Service Bus Messaging

Microsoft Azure Service Bus supports two distinct messaging patterns: Azure Relay and Service Bus Brokered Messaging. This article is an introduction to Brokered Messaging.
Brokered Messaging can decouple the communication between applications by introducing an asynchronous messaging system. The sender applications put requests on a queue, and the receivers pick them up and process them whenever they are available. This way, if the receiver application is temporarily busy or unavailable, the sender application doesn’t fail or get blocked.

Queues

A queue is an entity with a head and a tail. Senders put the messages on the head and receivers pick them up from the tail.

[Image: sb-queues-08]

Once a message is processed by a receiver, it is no longer available on the queue for other receivers. When there are multiple message receivers, the message load is distributed between them in a load-balanced manner.

Topics and Subscriptions

Topics and Subscriptions can be thought of as queues with one head but multiple tails. The head is called a Topic and each tail is called a Subscription. When a sender sends a message to a Topic, Service Bus creates a copy of it for each subscription. If there are several receivers interested in receiving the message, they can each listen on a different subscription and get a distinct copy of the message. However, if they listen on the same subscription, only one of them will receive the message.

[Image: sb-topics-01]

Subscriptions can specify filters; this way each subscription only exposes the messages that match its filter.

Service Bus Namespace

A namespace is the way to categorize queues and topics. You cannot create any Service Bus entities unless you have created a namespace first. A company may want to have different namespaces for different projects or teams. Creating a namespace is easy: just go to the Azure portal and follow the instructions. After the namespace is created, it can be referred to using its connection string.

[Image: servicebus_overview]

Creating Service Bus entities (Queues, Topics, and Subscriptions) is also as easy as creating a Namespace. Just follow the instructions on the Azure portal.

You can also manage Service Bus entities programmatically using the NamespaceManager class.
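
For example, here is a minimal sketch of creating entities with NamespaceManager (from the Microsoft.ServiceBus namespace; it assumes connectionString is defined, and the entity names are examples):

// Requires the WindowsAzure.ServiceBus package.
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

// Create a queue if it doesn't exist yet.
if (!namespaceManager.QueueExists("orders"))
{
    namespaceManager.CreateQueue("orders");
}

// Topics and subscriptions can be created the same way.
if (!namespaceManager.TopicExists("events"))
{
    namespaceManager.CreateTopic("events");
    namespaceManager.CreateSubscription("events", "audit");
}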

Client API

In your project, install the WindowsAzure.ServiceBus NuGet package to get the Service Bus client library. Here are a few code snippets to get the basic things done:

Basic Queue operations

// Creating a Queue client
var client = QueueClient.CreateFromConnectionString(connectionString, queueName);

// Creating a message
var message = new BrokeredMessage("Hello");

// Sending a message
client.Send(message);

// Receiving messages
client.OnMessage(receivedMessage =>
{
    // Process the message here...
});

Basic Topic operations

// Creating a Topic client
var topicClient = TopicClient.CreateFromConnectionString(connectionString, topicName);

// Sending a message
topicClient.Send(message);

// Create a Subscription client
var subscriptionClient = SubscriptionClient.CreateFromConnectionString(connectionString, topicName, subscriptionName);

// Receiving messages
subscriptionClient.OnMessage(receivedMessage =>
{
    // Process the message here...
});

Messaging Factory

Another way of creating messaging clients (QueueClient, TopicClient, or SubscriptionClient) is using a MessagingFactory. It allows us to create the clients in a more abstract way, thinking of them as senders and receivers. It is more flexible and makes it easier to change the messaging strategy without having to update the code. For example, if your application is currently using Queues and you decide to switch to Topics/Subscriptions, all you need to do is update your endpoints, provided you have used a messaging factory to create the clients.

var factory = MessagingFactory.CreateFromConnectionString(ConnectionString);

var messageReceiver = factory.CreateMessageReceiver("entityPath");

var messageSender = factory.CreateMessageSender("entityPath");

Message Lifetime

When a message receiver is created, it specifies a receive mode (a sketch of choosing one follows the list). There are two receive modes:

  • Receive and Delete: In this mode, as soon as the receiver picks up the message, the message is deleted from the queue. If the receiver crashes before being able to process the message, the message is lost. This mode guarantees at-most-once delivery.
  • PeekLock: This is the default mode and guarantees at-least-once delivery. When the receiver picks up the message in this mode, Service Bus puts a lock on the message and hides it from other receivers, but the message still exists on the queue until one of the clients completes processing it. If the client crashes before that, the message becomes visible on the queue again after the lock expires. The default lock duration is 30 seconds, and it can be set on the queue or subscription.
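
The mode is chosen when the client is created. A minimal sketch, reusing the connectionString and queueName variables from the earlier snippets:

// PeekLock is the default mode:
var peekLockClient = QueueClient.CreateFromConnectionString(connectionString, queueName);

// Opting into Receive and Delete instead:
var receiveAndDeleteClient = QueueClient.CreateFromConnectionString(
    connectionString, queueName, ReceiveMode.ReceiveAndDelete);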

After the message receiver is done with the message, it should notify the service bus by calling one of the following methods:

  • Complete(): The message was successfully processed and it should be deleted from the queue.
  • Abandon(): The receiver doesn’t know how to process the message and it returns the message to the queue to make it available for other receivers.
  • Defer(): The receiver doesn’t want to process the message just yet, so the message is returned to the queue, but it will not be delivered to any receiver unless they explicitly request it by its sequence number (see the sketch after this list).
  • DeadLetter(): The receiver has encountered an error processing the message and marks it as a dead-letter. Dead-letters are sent to a sub-queue within the same queue.
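
For Defer(), here is a sketch of deferring a message and later retrieving it by sequence number (client is a QueueClient in PeekLock mode and message is a received BrokeredMessage, as in the earlier snippets):

// Defer the message and remember its sequence number.
var sequenceNumber = message.SequenceNumber;
message.Defer();

// ... later, when the receiver is ready, fetch it explicitly:
var deferredMessage = client.Receive(sequenceNumber);
deferredMessage.Complete();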

The following code shows how to read a message from the dead-letter queue:

var factory = MessagingFactory.CreateFromConnectionString(ConnectionString);
var deadletterPath = SubscriptionClient.FormatDeadLetterPath(TopicName, SubscriptionName);
var deadletterClient = factory.CreateMessageReceiver(deadletterPath);
deadletterClient.OnMessage(message =>
{
    // Process the dead-letter message here
});

A queue or subscription can be configured to move expired or erroneous messages to a dead-letter queue too.
[Image: servicebus_edit_subscription]

The OnMessage method automatically marks any received message as completed. If the client is going to take care of this itself, AutoComplete should be disabled first:

messageReceiver.OnMessage(message =>
{
    try
    {
        // if message processed successfully
        message.Complete();
    }
    catch (Exception exception)
    {
        // If the message has an error
        message.DeadLetter();
    }

}, new OnMessageOptions
{
    AutoComplete = false
});

The longest a message can live on a queue before it is processed is determined by its time-to-live, after which the message is deleted automatically. Time-to-live can be specified on the queue, topic, subscription, or the message itself; the effective lifetime is the minimum of the values specified on these.

var message = new BrokeredMessage
{
    TimeToLive = TimeSpan.FromMinutes(10)
};

Duplicate Detection

When sending a message, if an error happens and the client cannot send the message, it usually retries, and this may lead to duplicate messages on the system. A client also re-sends the message if it doesn’t receive an acknowledgement from the queue. Enabling duplicate detection keeps track of the MessageId property of all messages: if a new message with the same MessageId as an existing one is detected within a time window (the duplicate detection history), it is ignored and dropped.

Duplicate detection can be enabled on a Queue or Topic when creating them, either in the Azure portal or programmatically by setting the RequiresDuplicateDetection property. The duplicate detection history window can also be set on a Queue or Topic; in code, set DuplicateDetectionHistoryTimeWindow on a QueueDescription or TopicDescription.
The duplicate detection window should be kept as small as feasible, because a large window means keeping track of more MessageIds, which impacts throughput.
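
A minimal sketch of enabling it programmatically (namespaceManager is a NamespaceManager as shown earlier; the queue name and window size are examples):

var queueDescription = new QueueDescription("orders")
{
    RequiresDuplicateDetection = true,
    DuplicateDetectionHistoryTimeWindow = TimeSpan.FromMinutes(10)
};
namespaceManager.CreateQueue(queueDescription);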

Filters, Rules and Actions

A subscription can be created with a filter so that it delivers only a subset of messages to subscription clients. A CorrelationFilter allows for basic filtering based on message properties like MessageId, CorrelationId, etc.

namespaceManager.CreateSubscription(topicPath, subscriptionPath,
    new CorrelationFilter { Label = "red", CorrelationId = "high"});

You can also specify a SqlFilter, which allows for SQL-like expressions:

namespaceManager.CreateSubscription(topicPath, subscriptionPath,
    new SqlFilter("color = 'blue' AND quantity = 10"));

When creating a subscription, if a filter is not explicitly specified, the subscription is created with a TrueFilter by default, which means the subscription accepts all messages.
A subscription can also be created with a Rule. A rule executes an Action on the messages matching the filter.

namespaceManager.CreateSubscription(topicPath, subscriptionPath,
    new RuleDescription
    {
        Name = "RedRule",
        Filter = new SqlFilter("color = 'red'"),
        Action = new SqlRuleAction(
            "SET quantity = quantity / 2;" +
            "REMOVE priority;" +
            "SET sys.CorrelationId = 'low';")
    });

In the above example, the rule runs a SqlRuleAction on any message whose ‘color’ property is ‘red’; as a result of the action, two properties of the message are set and one property is removed.

Shared Access Policies

Every messaging entity is associated with a set of access policies that determine what you can do with the entity. When an entity is created, it comes with a default policy named RootManageSharedAccessKey which gives you full access (Manage, Send, Receive) to the entity.

[Image: shared-access]

Each policy has a primary key and a secondary key, both of which can be used to connect to the entity. If a key is compromised, you should regenerate it and update all clients to use the new key. The reason there are two keys is that you can switch to the other key before regenerating one, avoiding downtime in your system.

[Image: edit-policy]

It is not recommended to use your root manage access key everywhere in your system. You should create custom policies with reasonable access levels (Send, Listen, or Manage) and use them in clients.
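
As a sketch, a custom send-only policy can be attached to a queue at creation time (the policy and queue names are examples):

var description = new QueueDescription("orders");
description.Authorization.Add(new SharedAccessAuthorizationRule(
    "SendOnlyPolicy",
    SharedAccessAuthorizationRule.GenerateRandomKey(),
    new[] { AccessRights.Send }));
namespaceManager.CreateQueue(description);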

Auto-forwarding

You can use auto-forwarding to scale out an individual topic. Service Bus limits the number of subscriptions on a given topic to 2,000. You can accommodate additional subscriptions by creating second-level topics. Note that even if you are not bound by the Service Bus limitation on the number of subscriptions, adding a second level of topics can improve the overall throughput of your topic.

[Image: Auto-forwarding scenario]

You can enable auto-forwarding by setting the QueueDescription.ForwardTo or SubscriptionDescription.ForwardTo properties on the source queue/subscription:

var srcSubscription = new SubscriptionDescription(srcTopic, srcSubscriptionName);
srcSubscription.ForwardTo = destTopic;
namespaceManager.CreateSubscription(srcSubscription);

The destination entity must exist at the time the source entity is created.


References:
Microsoft Azure Service Bus messaging documentation
Microsoft Azure Service Bus brokered messaging course on Pluralsight
Southworks blog by Jorge Rowies
CloudCasts by Alan Smith
GitHub samples


Repository Pattern

Definition

Repository mediates between the domain and data mapping layers, acting like an in-memory domain object collection. Client objects construct query specifications declaratively and submit them to Repository for satisfaction. Objects can be added to and removed from the Repository, as they can from a simple collection of objects, and the mapping code encapsulated by the Repository will carry out the appropriate operations behind the scenes.

Objectives

Use the Repository pattern to achieve one or more of the following objectives:

  • You want to maximize the amount of code that can be tested with automation and to isolate the data layer to support unit testing.
  • You access the data source from many locations and want to apply centrally managed, consistent access rules and logic.
  • You want to implement and centralize a caching strategy for the data source.
  • You want to improve the code’s maintainability and readability by separating business logic from data or service access logic.
  • You want to use business entities that are strongly typed so that you can identify problems at compile time instead of at run time.
  • You want to associate a behavior with the related data. For example, you want to calculate fields or enforce complex relationships or business rules between the data elements within an entity.
  • You want to apply a domain model to simplify complex business logic.


Best Practices

  1. Do not use IQueryable<T> as the return type of query methods, because it creates tight coupling and you get no benefit over using the OR/M directly. Use more generic types like IEnumerable<T>.
  2. Create repositories for domain aggregates only, not for every single entity. If you are not using Aggregates in a Bounded Context, the repository pattern may be less useful.
  3. Creating base types for repositories is not recommended. If one is needed, make sure it doesn’t break the Interface Segregation Principle.
  4. It is not recommended to create generic repositories; make sure you have a good reason if you want to create one.
  5. Repositories must return domain entities which are usable in the domain services without having to be converted, so any data contracts must be converted to domain entities before the repository returns them.
  6. Apart from CRUD operations, repositories can also have identity generators, count methods, or finder methods.
  7. A repository should only have the minimum required operations. It doesn’t even have to include all the four of CRUD operations. It can be as simple as a repository with only one operation.
  8. It is OK to have bulk methods for adding or removing multiple items to or from the repository.
  9. If you find that you must create many finder methods supporting use case optimal queries on multiple Repositories, it’s probably a code smell. First of all, this situation could be an indication that you’ve misjudged Aggregate boundaries and overlooked the opportunity to design one or more Aggregates of different types. However, if you encounter this situation and your analysis indicates that your Aggregate boundaries are well designed, this could point to the need to consider using CQRS.
  10. Do not design a Repository to provide access to parts that the Aggregate Root would not otherwise allow access to by way of navigation.
  11. Place the repository interface definition in the same Module (assembly) as the Aggregate type that it stores. The implementation class goes in a separate Module.
  12. Generally speaking, there is a one-to-one relationship between an Aggregate type and a Repository.
  13. If the aggregate includes large members, for example, big lists of items, you can consider lazy loading.
  14. The only responsibility of a repository is data persistence, so no business (domain) logic should be added to a repository; the operations are CRUD only. This also applies to the names of the methods.
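
As a sketch of several of these practices, here is a minimal repository for a hypothetical Order aggregate: a few intention-revealing operations that accept and return domain entities (Order and OrderId are stand-ins):

using System.Collections.Generic;

// Hypothetical aggregate root and identity types.
public class OrderId { }
public class Order { }

public interface IOrderRepository
{
    Order FindBy(OrderId id);                 // finder method (practice 6)
    IEnumerable<Order> FindPendingOrders();   // only what this context needs (practice 7)
    void Add(Order order);                    // bulk variants are fine too (practice 8)
    void Remove(Order order);
}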


Successful Backend Architecture

This is a clean architecture and a simplified version of the onion architecture. It also conforms to the principles of Hexagonal Architecture.

Layers

In the following diagram, the arrows show the direction of the dependencies. Any outer layer can directly call ANY inner layer, so the yellow layer (the Adapters layer) can rely on all of the other layers. The application layer can only have a dependency on the domain layer, and the domain layer should NOT have any references to the other layers. Any communication with the outside world happens in the Adapters layer.

[Image: Layers_overview]

Adapters layer

This layer makes it possible for the application to communicate with the outside world. In order to interact with external components like databases, the web, the file system, etc., the application needs a layer which can convert its internal objects to something understandable by the external components and vice versa. For example, a Data Access component is needed to let the application talk to a database; or, to expose the application’s functionality through the web, we need to create a Web Service which makes use of the .NET Framework web technologies. These components are called Adapters and they reside in the Adapters layer. The inner layers and other adapters cannot have dependencies on this layer.

[Image: Adapters]

As the above diagram shows, there are two groups of external components: driving actors and driven actors (external resources). A driving actor is an actor that drives the application (takes it out of its quiescent state to perform one of its advertised functions); end users, automated tests, and remote clients are a few examples. A driven actor is one that the application drives, either to get answers from or merely to notify; databases, the file system, and remote web services are some examples of driven actors (a.k.a. external resources). The distinction between “driving” and “driven” lies in who triggers or is in charge of the conversation. These two groups of external components require two groups of adapters: driving adapters and driven adapters.

Driving Adapters

These are the adapters that expose the application’s functionality to the outside world; in other words, they allow the driving actors to drive the application. The other layers of the application never need to see this layer, but the driving adapters need to have dependencies on them. The following list covers the driving adapters:

  • Web Service: Regardless of the technology (Web API, WCF, or SOAP (ASMX)), all web services go in this layer. It must not have any business or application logic and should be as thin as possible. The consumers of this layer are remote clients.
  • Tests: These are another resident of this layer. They call the functions of the other layers and allow the test runners to drive the application for test purposes, so the test runners are the driving actors of this layer.
  • Presentation: This layer has the adapters that translate the user’s keystrokes and gestures into data understandable by the application and visualize the application’s output to the user. The actor who drives this layer is the end user.

Note

The driving adapters are where the composition root of the entire application should go. Any service registration happens in this layer.
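
A minimal sketch of such a composition root, assuming the Microsoft.Extensions.DependencyInjection package (all type names are examples):

using Microsoft.Extensions.DependencyInjection;

public interface ICustomerRepository { }                      // defined in the domain layer
public class SqlCustomerRepository : ICustomerRepository { }  // lives in the infrastructure layer

public static class CompositionRoot
{
    public static ServiceProvider Build()
    {
        var services = new ServiceCollection();
        // The driving adapter wires inner-layer abstractions to their
        // infrastructure implementations at startup.
        services.AddScoped<ICustomerRepository, SqlCustomerRepository>();
        return services.BuildServiceProvider();
    }
}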

Driven Adapters (Infrastructure)

This layer has the concrete implementations of the dependencies of the other layers. Infrastructure is any code that is a commodity and does not give your application a competitive advantage. This code is the most likely to change frequently as the software goes through years of maintenance, which is why no other layer should have a dependency on it; any dependency on this layer must be through abstractions. For example, if the software needs a repository, it should use an abstract repository (i.e. an interface), and the implementation of the repository goes in this layer. We rely heavily on the Dependency Inversion principle: the application core needs implementations of its core interfaces, and since those implementing classes reside at the edges of the application, we need some mechanism for injecting that code at runtime.

  • Data Access: This layer has the implementations of the repositories. They can persist and retrieve the domain objects in some storage. Depending on the storage used, we may need different adapters (different implementations of the repository).
  • Service Proxy: Whenever we add a service reference to the application, a proxy component is generated which is an adapter between the application and the remote service. It makes it possible to send and receive messages through the web. Depending on the messaging technology, we may need different adapters here: be it SOAP, TCP, or even a message queue, the application should not need to change because of a change in the underlying messaging technology.
  • File Access is another example of infrastructure. It makes it possible to operate on the file system regardless of the contents of the files.

Note: The adapters in the infrastructure layer should be replaced with test doubles when testing the application.

[Image: Layers_Details]

Application layer (Orchestrator)

This layer contains the application services, which are single-responsibility classes that do the orchestration of the application and deal with application logic. These services are different from the web services in the Web Service layer. Be advised that this layer is not meant to have any domain (a.k.a. business) logic; domain logic goes in the Domain layer.

  • We define the DTO classes (request/response) in this layer.
  • Application services are also responsible for composing the response objects.
  • As a naming convention, suffix them with the word “ApplicationService” or just “Service”, but be consistent.
  • Application services can consume repositories and other domain services.
  • Although it doesn’t see the infrastructure layer, it can see its abstractions.

Domain layer (Business layer)

This layer has all of the domain models: entities, value objects, domain events, and domain exceptions. The abstractions of the repositories also go in this layer. The business rules should be applied in this layer.
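
To show the dependency direction across the inner layers, here is a minimal sketch (all names are hypothetical): the domain layer owns the entity, the business rule, and the repository abstraction; the application layer orchestrates them and composes the response DTO; the concrete repository would live in the infrastructure layer.

// Domain layer
public class Invoice
{
    public decimal Total { get; private set; }
    public Invoice(decimal total) { Total = total; }
    public void ApplyDiscount(decimal rate) => Total -= Total * rate; // business rule
}

public interface IInvoiceRepository      // abstraction lives in the domain layer
{
    Invoice FindBy(int id);
    void Save(Invoice invoice);
}

// Application layer
public class InvoiceResponse { public decimal Total { get; set; } }  // response DTO

public class InvoiceApplicationService
{
    private readonly IInvoiceRepository repository;  // implementation injected
                                                     // from the infrastructure layer
    public InvoiceApplicationService(IInvoiceRepository repository)
        => this.repository = repository;

    public InvoiceResponse ApplyDiscount(int invoiceId, decimal rate)
    {
        var invoice = repository.FindBy(invoiceId);  // orchestration only;
        invoice.ApplyDiscount(rate);                 // the rule stays in the domain
        repository.Save(invoice);
        return new InvoiceResponse { Total = invoice.Total };
    }
}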

Why this architecture

It is a clean architecture, which means it can produce a system that is:

  1. Independent of Frameworks. The architecture does not depend on the existence of some library of feature laden software. This allows you to use such frameworks as tools, rather than having to cram your system into their limited constraints.
  2. Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element.
  3. Independent of UI. The UI can change easily, without changing the rest of the system. A Web UI could be replaced with a console UI, for example, without changing the business rules.
  4. Independent of Database. You can swap out Oracle or SQL Server, for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.
  5. Independent of any external agency. In fact your business rules simply don’t know anything at all about the outside world.

The Dependency Rule

The concentric circles represent different areas of software. In general, the further in you go, the higher level the software becomes. The outer circles are mechanisms. The inner circles are policies.

The overriding rule that makes this architecture work is the Dependency Rule. This rule says that source code dependencies can only point inwards. Nothing in an inner circle can know anything at all about something in an outer circle. In particular, the name of something declared in an outer circle must not be mentioned by the code in an inner circle. That includes functions, classes, variables, or any other named software entity.

By the same token, data formats used in an outer circle should not be used by an inner circle, especially if those formats are generated by a framework in an outer circle. We don’t want anything in an outer circle to impact the inner circles.

Key tenets

The following tenets are those of the onion architecture. Our architecture should adhere to all of them, as it is a simplified version of onion:

  1. The application is built around an independent object model
  2. Inner layers define interfaces. Outer layers implement interfaces
  3. Direction of coupling is toward the centre
  4. All application core code can be compiled and run separate from infrastructure

References


Hexagonal Architecture: http://alistair.cockburn.us/Hexagonal+architectur
Onion Architecture: http://jeffreypalermo.com/blog/the-onion-architecture-part-1/
Clean Architecture: http://blog.8thlight.com/uncle-bob/2012/08/13/the-clean-architecture.html

Dynamic Deserialization

Binary deserialization fails with a SerializationException saying ‘Unable to find assembly’ if you try to deserialize an object whose type is not referenced in the current application domain.

The easiest way to resolve this is to reference the assembly containing the required type. This way the BinaryFormatter will return an object of the same type and all you need to do is a simple cast:

var formatter = new BinaryFormatter();
var obj = (SourceType)formatter.Deserialize(stream);

However, this approach may not be possible for many reasons: You may not have access to the source assembly, the assembly may be targeting a different platform, etc. In such cases the solution is to create proxy types which have similar structure to the source type. Your BinaryFormatter then needs to be configured to deserialize the stream to a proxy type rather than the original one. This is done by creating a custom SerializationBinder:

sealed class MySerializationBinder : SerializationBinder
{
    public override Type BindToType(string assemblyName, string typeName)
    {
        // Redirect every incoming type to its counterpart in the proxy assembly.
        return Type.GetType(String.Format("{0}, {1}", typeName, PROXY_ASSEMBLY_NAME));
    }
}

You then need to assign it to the formatter:

var formatter = new BinaryFormatter
{
    Binder = new MySerializationBinder()
};

But there are still cases where you cannot even create proxy assemblies because the structure of the original types may be completely unknown to you. In such cases you should create a Descriptor Proxy which can represent a deserialized object in a dynamic way:

[Serializable]
class ObjectDescriptor : ISerializable
{
    private readonly SerializationInfoEnumerator enumerator;

    [SecurityPermission(SecurityAction.Demand, SerializationFormatter = true)]
    void ISerializable.GetObjectData(SerializationInfo info, StreamingContext context)
    {
    }

    [SecurityPermission(SecurityAction.Demand, SerializationFormatter = true)]
    private ObjectDescriptor(SerializationInfo info, StreamingContext context)
    {
        this.enumerator = info.GetEnumerator();
    }

    public IEnumerable<SerializationEntry> GetMembers()
    {
        this.enumerator.Reset();
        while (this.enumerator.MoveNext())
        {
            yield return this.enumerator.Current;
        }
    }
}

The descriptor object can then be used to create a dynamic object:

public dynamic CreateDynamicObject(ObjectDescriptor descriptor)
{
    var nameRegex = new Regex(&quot;&lt;(.+)&gt;&quot;);

    IDictionary&lt;string, object&gt; expando = new ExpandoObject();

    foreach (SerializationEntry member in descriptor.GetMembers())
    {
        string memberName = nameRegex.IsMatch(member.Name) ? nameRegex.Match(member.Name).Groups[1].Value : member.Name;
        expando[memberName] = member.Value is ObjectDescriptor complexMember ? CreateDynamicObject(complexMember) : member.Value;
    }

    return (ExpandoObject)expando;
}

Find a complete sample project implementing the above solution at this GitHub repository.

Change The Author of Git Commits

  1. Identify one revision before the commit you need to update. In the following example, the last two commits are going to be updated.
    [Image: CommitLogs]
  2. Run git rebase command with the identified commit number:
    git rebase -i 2778751
  3. In the editor, for each commit you need to update, change the word pick to edit. (Make sure you know how to use the Vim editor)
  4. Run the following command to set the author to the current configured git user:
    git commit --amend --reset-author --no-edit
    Or use the following to set it to a custom author:
    git commit --amend --no-edit --author="Author Name <email@address.com>"
  5. Run git rebase --continue
  6. Repeat steps 4 and 5 until all of the commits are updated.

Install .NET Core on Ubuntu 15.10

Currently, .NET Core is only supported on Ubuntu 14.04, and when you try installing it on Ubuntu 15.10 you get the following error:

Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
 dotnet : Depends: libicu52 (>= 52~m1-1~) but it is not installable
E: Unable to correct problems, you have held broken packages.

As a workaround, you can download and install libicu52 manually before installing dotnet.