
Category: Software Development

DI is not IoC

Dependency Injection (DI) helps to enable Inversion of Control (IoC), but DI itself is not IoC because truly moving the locus of control outward requires an architecture that wraps and invokes the DI. An example of this is ASP.NET MVC, which uses DI to instantiate the controllers and other objects needed to process web requests. That is IoC, but the libraries that implement DI (“containers”) are not themselves IoC containers; they are DI containers. Calling them IoC containers is inaccurate.
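
For illustration, here is a minimal C# sketch of the distinction (the type names are invented for the example). The constructor parameter is the Dependency Injection; the fact that the framework, not our code, constructs the controller and invokes it is the Inversion of Control:

public interface IOrderService { void Place(string item); }

// DI: the controller declares its dependency and receives it from outside.
public class OrdersController
{
    private readonly IOrderService _orders;

    public OrdersController(IOrderService orders) { _orders = orders; }

    // IoC: our code never writes "new OrdersController(...)" or calls Post
    // directly; the framework instantiates the controller and invokes it
    // when an HTTP request arrives.
    public void Post(string item) => _orders.Place(item);
}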

The Service Locator Pattern

Developers keep referring to Service Locator as an anti-pattern. If that is the case then ASP.NET MVC and every IoC container I’ve ever seen must be wrong because they use it.

The interface for accessing an IoC container is an implementation of the Service Locator pattern. You’re asking for some particular interface (aka a service) and it’s giving you back an instance (if it can).

Under the hood ASP.NET MVC uses a service locator (which almost always happens to be an IoC container) to new-up Controllers for handling incoming HTTP requests.
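
A bare-bones C# sketch of the pattern itself (assuming nothing about any particular container; the names are invented for the example):

using System;
using System.Collections.Generic;

public class SimpleServiceLocator
{
    private readonly Dictionary<Type, Func<object>> _factories
        = new Dictionary<Type, Func<object>>();

    public void Register<T>(Func<T> factory) where T : class
        => _factories[typeof(T)] = () => factory();

    // The essence of Service Locator: ask for an interface (a service)
    // and get back an instance, if one is registered.
    public T Resolve<T>() where T : class
        => (T)_factories[typeof(T)]();
}

Every container I’ve seen exposes some variation of that Resolve call, which is exactly the service-locator interface being described above.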

Service Locators can certainly be used incorrectly or where they should not, but they are not an anti-pattern. They are a specific tool in what should be an immense toolbox for solving certain types of problems. Sometimes they are the best choice. Sometimes they are a terrible choice. But the pattern itself is not at fault.

For more read Service Locator vs Dependency-Injection, which goes into more detail and is also a very fun read. I’d mention the author’s name, but I can’t seem to find a name associated with the blog.

Original Coder Libraries w/Layers Architecture

I’ve just pushed a new version of the Original Coder Libraries up to GitHub that includes the first draft of the Layers library and architecture.

The libraries are hosted on GitHub: The Original Coder Libraries

This push includes the first version of the Layers architectural library I’ve been working on. It is based on similar architectures I’ve used on a few different projects in the past which proved to be very helpful. From a features and maturity standpoint this could probably be considered the 3rd incarnation (once completed; it is still in alpha).

The library makes it incredibly easy and efficient to build software systems using layers, especially systems that deal with data needing CRUD (Create, Read, Update and Delete) operations. Using the library it will be possible to implement a full set of CRUD endpoints for a resource (an entity, database table or the like) in about 100 lines of code.
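
To give a sense of what that might look like, here is a purely hypothetical sketch (these are not the actual Layers types, which are still in alpha): generic base classes carry the CRUD plumbing once, so each resource needs only a thin subclass:

using System.Collections.Generic;

public interface IEntity { int Id { get; } }

public class Product : IEntity
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Hypothetical base layer: implements CRUD once for any entity type.
public abstract class CrudServiceBase<T> where T : class, IEntity
{
    private readonly Dictionary<int, T> _store = new Dictionary<int, T>();

    public virtual void Create(T item) => _store[item.Id] = item;
    public virtual T Read(int id) => _store.TryGetValue(id, out var e) ? e : null;
    public virtual void Update(T item) => _store[item.Id] = item;
    public virtual void Delete(int id) => _store.Remove(id);
}

// A concrete resource is then little more than a declaration.
public class ProductService : CrudServiceBase<Product> { }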

I’ve included a project named LayerApiMockup that provides an example of what setting up and implementing will be like with the library. It still needs a bit of work and I need to add the add-on libraries for implementing specific technologies (Entity Framework, ASP.NET MVC, etc) but this is a good start.

Senior Developers & S.O.L.I.D. Principles

The S.O.L.I.D. principles didn’t exist when I was learning to program, and by the time I had heard of them I had been working at a senior level for years. Once I heard about them I had a quick look, thought they were all pretty obvious and paid no more attention.

Let me clarify that. The principles that make up S.O.L.I.D. are pretty basic stuff. Junior developers will get them wrong all the time. Mid-level developers will mostly get them right but will still make mistakes. By the time a developer starts to transition into a senior role they should always be applying these sorts of basic principles correctly and mostly automatically. A developer who has been working at the senior level for a few years should never even need to think about such things consciously.

It’s just like how people don’t consciously think about how to walk or run, how to use punctuation when writing, or how to park their car after they have been doing any of those for a few years. Experienced authors and drivers don’t think about the basics of those tasks, which is why people can commute from home to work and not even be able to remember the drive. For the same reason, the S.O.L.I.D. principles should be applied at a subconscious level by senior developers.

Just recently this has become a pain for me. Now that S.O.L.I.D. is all the rage every interviewer seems compelled to drill developers on all of the details. Try to remember all of the low-level spelling and grammar rules you use when writing. If you’ve been out of school for at least a few years I bet you can’t remember most, if any, of them. You don’t need to anymore, just like I haven’t needed to think on such a basic level for years when programming. Those basic principles are automatic, and forcing them back into the conscious level isn’t a benefit and doesn’t improve anything.

Just a couple of years ago interviewers weren’t asking such basic questions in senior-level interviews. These types of basic questions are not going to land good senior developers. Someone who is book smart or spent time cramming before the interview could answer them even if they weren’t a developer. But the more in-depth questions that can’t be crammed for and require real (senior-level) experience, not just book learning, aren’t being asked as often (or at all) in senior-level interviews now.

Initial release of Original Coder Libraries!

I’ve created the Original-Coder-Libraries repository on GitHub and uploaded some source code to get things started! They are licensed under the GNU LGPL v3.

The libraries currently contain approximately 3,500 lines of C# according to code metrics. This is a tiny fraction of what I have in my personal libraries and I’ll be adding more in the future.

The OriginalCoder.Common library includes:

  • Abstract base class for implementing IDisposable
  • Abstract base class for implementing IDisposable that also automatically cleans up registered children.
  • Exception classes for use in Original Coder libraries
  • Comprehensive set of extension methods for reading and writing XML using Linq to XML
  • Interfaces and classes for returning messages and operation results (mostly intended for use with Web APIs)
  • Extension methods for working with enumerations
  • Centralized application configuration for working with DateTimes (such as which formats to use for user display vs data storage).
  • Many useful DateTime extension methods
  • Extension methods for working with Type
  • Extension methods for calculating a cryptographic hash of a disk file
  • Standard interfaces for defining common properties on classes (Name, Description, Summary, WhenCreated, WhenDeleted, etc).
  • Extension methods for working with standard object property interfaces.

The OriginalCoder.Data library includes:

  • Standard interfaces for defining common data properties on classes (WhenCreated, WhenUpdated, WhenDeleted, IsActive).
  • Extension methods for working with standard object property interfaces.
  • Standard interfaces for defining unique key properties on classes (Id, Uid, Key)
  • Extension methods for working with standard key interfaces.

Repository: https://github.com/TheOriginalCoder/Original-Coder-Libraries

My Definitions for the S.O.L.I.D. Principles

The 5 principles that make this up were bundled together as the S.O.L.I.D. principles in the early 2000s. These describe basic concepts that are required to write good code. So it is very important for novice, junior and mid-level programmers to put real thought into these and get good at applying the concepts correctly and consistently. The better a programmer gets at these the less time they should need to spend thinking about them.

As I mentioned these are very important basic programming concepts that need to be learned, mastered and (eventually) internalized by software engineers. But I personally think the language often used to define and explain the underlying concepts is a bit cryptic or overly complex. Below is my attempt at conveying these concepts.

(S) Single Responsibility Principle

Don’t mix multiple (especially unrelated) capabilities / functionalities together in one class. Make separate, more narrowly defined classes for each distinct capability. Classes that have a more defined / narrow purpose are easier to learn, apply, maintain and also reuse. Note that this applies to any division of code (methods, classes, interfaces, modules, libraries, etc).
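
A small C# example of the idea (types invented for illustration), where one class doing two unrelated jobs is split into two narrow ones:

// Before: one class mixes report formatting with disk persistence.
public class ReportManager
{
    public string BuildHtml() => "<p>report</p>";
    public void SaveToDisk(string path) => System.IO.File.WriteAllText(path, BuildHtml());
}

// After: each class has a single, narrow responsibility.
public class ReportFormatter
{
    public string BuildHtml() => "<p>report</p>";
}

public class ReportWriter
{
    public void Save(string path, string content) => System.IO.File.WriteAllText(path, content);
}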

(O) Open/Closed Principle

Great concept, not so great name.

Once functionality has been created and is in use, don’t modify it in a way that would break existing code! At the same time there should also be a way to extend or alter the behavior of the previously written code (without rewriting or copy/paste) that won’t break code that uses it.

A good way to do this is through the use of abstract base classes and inheritance. The core functionality for performing a task should be written into a base class. A concrete descendant class will inherit that functionality and allow it to be used. That concrete class should not be changed in a way that breaks existing code once it is in use. But new concrete classes, and possibly a new level of abstract class, can be added to extend or change the functionality without impacting existing concrete classes.
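
A short sketch of that approach (example names invented):

// Core functionality lives in an abstract base class.
public abstract class PriceCalculator
{
    public decimal Total(decimal subtotal) => subtotal + Fees(subtotal);
    protected abstract decimal Fees(decimal subtotal);
}

// Existing concrete class: in use, so never changed in a breaking way.
public class StandardPricing : PriceCalculator
{
    protected override decimal Fees(decimal subtotal) => subtotal * 0.05m;
}

// Behavior is extended by adding a new descendant, not by editing old code.
public class PremiumPricing : PriceCalculator
{
    protected override decimal Fees(decimal subtotal) => subtotal * 0.02m;
}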

(L) Liskov Substitution Principle

A rather complicated way of defining a good concept.

Descendant classes should not break either the explicit or the implicit contract of the parent class. Descendant classes should be implemented in a manner that allows them to be dropped in as replacements for the parent class without requiring any code that expects the parent class to be changed. The key here is that this is more than just signature-level compatibility (list of methods, parameters, and types). The expected behavior must also remain the same!

Note that this isn’t limited to classes. This concept should be applied anyplace where substitution is allowed by the programming language or system. Which means this also applies to interfaces. This would also apply to multiple DLLs that expose the same method signatures (which is sometimes used to implement plug-ins or extensions). Anyplace substitution is possible care should be taken to ensure the behaviors are consistent.
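
A compact C# illustration of a violation (invented types). The descendant compiles and matches the parent’s signature, but it breaks the implicit contract that Save persists data:

public class FileStore
{
    public virtual void Save(string name, string data)
        => System.IO.File.WriteAllText(name, data);
}

// Signature-compatible but behaviorally incompatible: any code that
// expects a FileStore to actually persist data breaks when handed this.
public class ReadOnlyStore : FileStore
{
    public override void Save(string name, string data)
        => throw new System.NotSupportedException("This store is read-only.");
}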

(I) Interface Segregation Principle

This is the same general concept that underlies the Single Responsibility Principle but applied specifically to interfaces. It’s the same underlying concept; it doesn’t really need two separate principles.
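
As a quick illustration (invented example):

// One broad interface forces every implementer to fake capabilities:
public interface IWorker { void Code(); void Deploy(); void WriteDocs(); }

// Segregated interfaces let a class implement only what it really does:
public interface ICoder { void Code(); }
public interface IDeployer { void Deploy(); }

public class BuildAgent : IDeployer
{
    public void Deploy() { /* push the build to the servers */ }
}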

(D) Dependency Inversion Principle

This is the least obvious of the bunch, possibly because this way of thinking and the frameworks / technologies needed to support it are more recent. Or maybe because this principle is emergent rather than standalone; it becomes possible due to the other principles.

The idea here is that classes which require instances of other classes (or interfaces) to perform their work should not instantiate specific concrete implementations within their code. This has the effect of embedding the decision as to which concrete implementation to use in a place where it is difficult to change. This also typically results in these choices being embedded in many different places throughout a software system.

The concept behind the Open/Closed principle encourages inheritance and abstractions. The concept behind the Liskov principle states that all implementations of something that allows substitution (such as classes and interfaces) must be interchangeable. If we’re applying both of those concepts consistently it’s a shame to make substitutions difficult by hard-coding and embedding those decisions all over the place in the code.

Bingo! And that’s why Dependency Inversion sprung up. To standardize and centralize the ability to use substitution in a software system.

Currently the most common preference for implementing this capability is the Dependency Injection pattern, where concrete instances are passed into a class via its constructor. But that is not the only option; any pattern that extracts and centralizes these decisions could be used. One such alternative is the Service Locator pattern which, in some cases, can be preferable to Dependency Injection.
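
A minimal before-and-after in C# (invented types):

public interface IInvoiceRepository { void Save(string invoice); }

public class SqlInvoiceRepository : IInvoiceRepository
{
    public void Save(string invoice) { /* write to SQL */ }
}

// Hard-coded: the concrete choice is buried where it is hard to change.
public class InvoiceProcessorHardCoded
{
    private readonly SqlInvoiceRepository _repo = new SqlInvoiceRepository();
}

// Injected: the caller (often a DI container) decides which
// IInvoiceRepository implementation to substitute.
public class InvoiceProcessor
{
    private readonly IInvoiceRepository _repo;
    public InvoiceProcessor(IInvoiceRepository repo) { _repo = repo; }
}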

It is worth noting that the Dependency Inversion principle is the only one of the bunch that isn’t universally applicable. Not all applications need DI. I would argue that all large or complex systems probably should use it. But small systems or one-offs that aren’t expected to be maintained probably don’t.

Software Architecture is Layers of Goodness

The idea of software systems having layers has been around for quite a while and the terminology is very helpful when used properly.

This article does not intend to cover the myriad reasons why a software architect would choose to use, or not to use, layers in a software system. My off-the-cuff thought is that any system that has more than 3 developers or more than 50,000 lines of code would probably benefit from layers to some degree. Layers are certainly not needed for all software systems, but they are certainly helpful in some systems. Even in smaller systems they can be a useful conceptual idea, a communication tool or handy for breaking up work by skill set.

A layer is all of the classes, types and related artifacts that are used to perform a particular type of function in a software system. The code that makes up layers is mostly application specific (written by the application developers specifically for use in that one system). Layer code does not mingle; all application-specific code that exists in a layer exists in one and only one layer. Generic code (such as List, cryptography functions, string functions, etc) does not fall into a layer because it isn’t application specific. Ideally code that isn’t application specific and doesn’t fall into a layer should be written as reusable code and put in a library. Failing that, the code that makes up layers in a system should at least exist in separate and obviously named namespaces, with non-layered code in different namespaces.

The most common layers of functionality used in software systems are the presentation layer, the service layer and the repository layer. If a system is intended for use by other systems (not an end user) then an API layer takes the place of the presentation layer. Each of these common layers can be referred to by different names depending on who you talk to. Just like a rose, the name isn’t important because the purpose of the layer remains the same.

  • Presentation Layer, User Interface Layer, GUI Layer and Web Client Layer all refer to the same functionality of interacting with a person.
  • API Layer, Web API Layer and RESTful API Layer all refer to the same functionality of interacting with other external systems via a defined API.
  • Service Layer, Business Layer, Logic Layer and Business Logic Layer all refer to the same functionality of implementing the business rules, logic and complex processing within the software system.
  • Repository Layer and Data Access Layer both refer to the same functionality of reading and writing data to/from persistent storage.

The presentation / user interface layer is responsible for all user interaction. Business logic/rules and code that reads and writes data from persistent storage should not exist in this layer. The purpose of this layer is limited to displaying information and interacting with the user.

An API / Web API layer takes the place of the user interface layer in software systems that expose their functionality for use by other systems. Functionally this layer plays the same role as the user interface layer would: it is responsible for all interfacing with the client.

The repository / data access layer is responsible for reading and writing data to/from persistent storage. Most often this is a relational database but it can be any type of structured storage (files, XML, NoSQL databases, etc).

The service / business logic layer is most of the code that exists between the presentation and repository layers. It is where the business logic and rules are implemented and where complex processing occurs. Business rules and logic should not be coded into other layers.

There can also be adapter layers positioned between the other layers in a system. For example, an adapter layer between a Web API layer and a service layer converts data from the Data Transfer Objects (DTOs) used in the API to the structures used by the service layer, and then back into DTOs when data is returned from the service layer.
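
For example, a tiny adapter might look like this (invented types):

// Shape used by the Web API layer.
public class CustomerDto { public string Name { get; set; } }

// Shape used by the service layer.
public class Customer { public string Name { get; set; } }

// The adapter layer translates in both directions so neither side
// needs to know about the other's types.
public class CustomerAdapter
{
    public Customer ToService(CustomerDto dto) => new Customer { Name = dto.Name };
    public CustomerDto ToDto(Customer entity) => new CustomerDto { Name = entity.Name };
}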

Systems can also include a proxy layer that replaces the service and repository layers for capabilities that are implemented by an external system. For example, a complex web application where the front end is implemented by one set of servers which use proxies to call into the Web API exposed by another set of servers containing the business rules, logic, processing and data access code.

Now that we have the fundamentals out of the way we can discuss why the concept of layers in software systems is important and still very much relevant today.

Without the concept of layers, software systems would be big collections of objects that interact with each other. This would be troublesome because not all objects in a software system should be allowed to talk to each other. Layers impose a top-down design: code executing in a layer can call other code in the same layer or in the next layer down, but code cannot call into layers above it and cannot skip over layers when calling downward. This is why the concept of layers is important in software systems!

Classes in the presentation/API layer should never directly talk to classes in the repository layer. Likewise classes in the service layer should never call classes in the presentation or API layer. Classes in the data access layer should never call presentation/API or service layer classes. If we were to throw out the concept of layers and view software systems as a collection of classes that call each other then these very important concepts of separation would be lost.
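
A skeletal example of the allowed call directions (invented types):

// Repository layer: bottom of the stack.
public interface IOrderRepository { string Load(int id); }

// Service layer: may call down into the repository, never up.
public class OrderService
{
    private readonly IOrderRepository _repo;
    public OrderService(IOrderRepository repo) { _repo = repo; }
    public string GetOrder(int id) => _repo.Load(id);
}

// Presentation/API layer: calls the service layer only; it never
// touches IOrderRepository directly and never skips a layer.
public class OrderController
{
    private readonly OrderService _service;
    public OrderController(OrderService service) { _service = service; }
    public string Get(int id) => _service.GetOrder(id);
}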

A good system architecture codifies these layer concepts and makes them easier and more accessible for the application programmer while also reducing the likelihood of bad code that violates these principles. How this can be done in architecture is a difficult concept to express concisely and would require, at the very least, a sizable article of its own. I am working on transforming some of my existing personal library code into open source libraries that I’ll publish on NuGet in the foreseeable future. Those libraries will contain a very nice structured architecture I’ve used numerous times to implement these layer concepts in systems. If you’re interested in that keep an eye on my blog.

My Natural Environment

This is where I do my software development (unless I’m on-site in a cubicle somewhere).

This is my natural environment (my home office in development mode)

I love working in my home office and find it to be an extremely effective and efficient environment for writing software!

  • Fast Intel i7 CPU at 4 GHz
  • 32 GB of RAM
  • 1 TB SSD for boot & temporary storage
  • Dual 4TB hard drives in RAID-1 for data storage
  • Nvidia GeForce GTX 1080 for driving my main 4K monitor
  • Nvidia GeForce GTX 1050 Ti for driving my 2 additional monitors
  • Main monitor is 55″ 4K which allows me to see A LOT OF CODE at once
  • Two secondary monitors for references or other information
  • Indirect, subdued lighting so the focus is the monitors
  • Music with a strong beat (electronic, industrial or metal) for motivation

Though normally the computer sits on the right side of the desk, not on top. My motherboard failed last week and I had to replace it. Historically I’ve preferred AMD-based systems but decided to give Intel a try for the last upgrade. That has gone very poorly, resulting in yet another hardware failure (see my previous post AMD vs Intel).

I’m waiting for the 3rd generation of AMD Ryzen Threadripper processors to become available (hopefully later this year) for my next upgrade. Those are a game changer with all of their extra I/O bandwidth. I’ll also upgrade to a M.2 PCIe main drive and 64 GB of RAM at the same time. Should be nice.

Benchmarking Multi-Threaded Data Synchronization

Application & Source Code

In addition to the benchmarks themselves, just as significant is the source code used to perform them. The benchmark application implements all of the synchronization methods being benchmarked and also uses quite a lot of multi-threading itself. I figured other developers might find it illustrative, or at least interesting, to take a look at.

Screenshot of the benchmarking application

NOTICE: The application code is protected by copyright and provided only for reference purposes. Please do not republish or copy any of the code. The code works fine for my purposes here, but various portions of it (namely the included interfaced locking pattern implementations) have not been seriously tested and I would not trust the code for production use! Other portions, such as the logging and GUI synchronization code, are overly simplistic and only applicable for this particular use. I have extensive libraries which have been tested and used over many years that contain the locking, logging and other code I use for production implementations. None of this code was taken from my libraries; all of it was thrown together over a weekend because I felt like playing with some locking benchmarks.

I plan to release portions of my C# libraries via GitHub in the future and have already started some of the refactoring and minor cleanup necessary.  That will include the implementation of my Interfaced Thread Locking pattern along with much else.  Keep an eye on my blog at http://OriginalCoder.dev for news and future developments.

The benchmarking application is written using WinForms and uses a lot of multi-threading capabilities:

  • Creates numerous Thread instances that must be initialized, orchestrated and then cleaned up.
  • Performs all benchmarking in a background thread so that the GUI can continue being refreshed. 
  • Displays ongoing progress of the benchmarking in the GUI.
  • Allows the user to cancel benchmarking via the GUI.
  • Implements a simple thread-safe logging system.

The following are used from System.Threading:

  • Barrier
  • Interlocked.Increment
  • LockCookie
  • ManualResetEvent
  • Monitor
  • Mutex
  • ReaderWriterLock
  • ReaderWriterLockSlim
  • Semaphore
  • SemaphoreSlim
  • Thread
  • Thread.CurrentThread
  • Thread.Sleep

These are used from System.Threading.Tasks:

  • Task.Run
  • Task<TResult>
  • TaskStatus

Quick Multi-threading Overview

First a quick bit of background for anyone reading this who hasn’t done multi-threaded programming. 

When writing code that will be executed using multiple threads there are a lot of important considerations.  This is not an article about how to write multi-threaded code, so I’m not going to cover most of those topics.  The topic central to this article is synchronizing access to shared data.  When multiple threads are reading and writing shared data great care must be taken to make sure that data does not become corrupt and that the individual threads get accurate data when reading.  The more complicated and inter-related the data is the more likely problems would occur without proper synchronization.  But, to be clear, all multi-threaded code should always use appropriate synchronization strategies to ensure the shared data never gets corrupted and read operations don’t get bad data.

Essentially only a single thread should be allowed to write to shared data at a time and no threads should be allowed to read the shared data while a write is occurring.  The easiest way to implement this is to have a locking mechanism that only allows a single thread to hold the lock at a time and then have any thread needing to either read or write the data obtain the lock before doing so.  This is the simplest implementation (such as using the “lock” keyword) but can have performance issues if there are many threads and most of the time they only read data because the threads will block each other from reading simultaneously. For this reason, some locking mechanisms (such as ReaderWriterLock) support separate read and write locks which allow any number of threads to obtain a read lock simultaneously, but make write locks exclusive including denying reads when a write lock is held.

A quick example would be code in multiple threads working with the same Dictionary<string, object> structure.  If 2 separate threads were to make changes to the dictionary at the same time the internal data structures could get corrupted because the operations would likely step over each other.  Only a single thread should be allowed to write to shared data at one time, so an exclusive write lock should be obtained before making any changes.  That would ensure that the Dictionary never becomes corrupt, but threads that only read would still have problems if they are not included in synchronization.  Consider a simple case where a thread gets the Count of the dictionary and uses it to make a decision before attempting to read specific keys.  If the reading thread gets the count and then gets suspended (put on hold) while another thread performs a write operation, when the reading thread continues the data in the dictionary has changed so the count is different, but the reading thread will continue doing its work as if the count were still accurate.  This makes the work performed by the reading thread invalid, which is why reading also requires synchronization.
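
In code, that scenario looks roughly like this (a simple sketch using the lock keyword):

using System;
using System.Collections.Generic;

public static class SharedData
{
    private static readonly object _sync = new object();
    private static readonly Dictionary<string, object> _shared
        = new Dictionary<string, object>();

    public static void Write(string key, object value)
    {
        lock (_sync) { _shared[key] = value; }   // exclusive: no readers during the write
    }

    public static void ReadConsistently()
    {
        lock (_sync)   // the Count check and the key lookup see one consistent snapshot
        {
            if (_shared.Count > 0 && _shared.TryGetValue("first", out var value))
                Console.WriteLine(value);
        }
    }
}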

One other detail to note is that when implementing multi-threaded code it is important to reduce the amount of time spent inside any lock as much as possible.  Because many threads may be competing for access to the locks it is important to reduce contention as much as possible.  Never obtain a lock and then perform a bunch of work / calculations inside the lock when that work could have been done in preparation before obtaining the lock.  The ideal situation is where all work and calculations can be performed outside of the lock and then only a few quick & simple reads or writes are necessary from inside the lock.  That is, of course, not always possible.  If for example the total sum for a particular property for all objects contained in a shared list is required, then a read lock would need to be obtained before iterating through the list and computing the total.
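
A small sketch of that discipline:

using System.Collections.Generic;
using System.Linq;

public class OrderTotals
{
    private readonly object _gate = new object();
    private readonly List<decimal> _totals = new List<decimal>();

    public void RecordOrder(IEnumerable<decimal> lineItems)
    {
        decimal total = lineItems.Sum();   // expensive work done with no lock held

        lock (_gate)
        {
            _totals.Add(total);            // only the quick write happens inside the lock
        }
    }
}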

Data Synchronization Options

The following list of .NET Framework classes all provide very similar capabilities to synchronize data in a multi-threaded environment.  All of these provide a locking mechanism that, once the lock is obtained, allows any number of operations to be performed while it is held.  Most of these only provide a single locking mechanism (for both reads and writes), but a couple provide separate read and write locks.

  • lock (keyword)
  • Monitor
  • Mutex
  • ReaderWriterLock
  • ReaderWriterLockSlim
  • Semaphore
  • SemaphoreSlim

The “lock” keyword built into C# is just a syntactic shortcut for using a Monitor.  I do benchmark the two separately below, but for discussion I’ll just talk about Monitor.

Monitor and Mutex both provide the capability of a single exclusive lock.  When using either of these both read and write operations obtain the same lock.  This means that multiple reading threads will block each other.  If there were many threads trying to read they would essentially queue up waiting for the lock and then proceed through single file.  Depending on the exact implementation details this could remove any advantages of using multiple threads.

Semaphore and SemaphoreSlim also provide the capability of a single lock, but can potentially allow multiple threads to obtain one simultaneously depending on how they are configured.  For this article and the benchmarks I configured the Semaphores so that they only allowed a single thread to obtain a lock at one time which made them operate similarly to Monitor and Mutex.  As such the same caveats above about readers blocking each other and queueing up also apply to Semaphores in this context.

Both ReaderWriterLock and ReaderWriterLockSlim provide the same capabilities, but are different implementations.  These classes provide separate locks for reading and writing where any number of readers are allowed simultaneously but only a single write lock is allowed (which also blocks any readers when held).  Used properly these classes can greatly improve performance when many threads are executing and most operations are read only.  Because these classes provide more capabilities they can be implemented and used in different ways, so I’ve benchmarked each of these classes 3 different ways.  First using read locks when reading and then only obtaining a write lock when a write is necessary.  Second, always obtaining an upgradable read lock and then upgrading it to a write when necessary.  Lastly, I also benchmarked these where all operations obtained a write lock.  The last one, only using write locks, is bad practice and makes these classes operate similarly to a Monitor and Mutex (destroying the point of using these instead of one of those).  I do not at all recommend ever doing that in code, but I decided to benchmark the write only case just for informational comparison purposes.
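
For reference, the first two acquisition styles look like this with ReaderWriterLockSlim (ReaderWriterLock exposes a different, clunkier API):

using System.Threading;

public class RwStyles
{
    private readonly ReaderWriterLockSlim _rw = new ReaderWriterLockSlim();
    private int _value;

    // Style 1: read lock for reads, write lock only when writing.
    public int ReadValue()
    {
        _rw.EnterReadLock();
        try { return _value; }
        finally { _rw.ExitReadLock(); }
    }

    public void WriteValue(int value)
    {
        _rw.EnterWriteLock();
        try { _value = value; }
        finally { _rw.ExitWriteLock(); }
    }

    // Style 2: upgradeable read lock, upgraded only if a write is needed.
    public void EnsureNonZero()
    {
        _rw.EnterUpgradeableReadLock();
        try
        {
            if (_value == 0)
            {
                _rw.EnterWriteLock();
                try { _value = 42; }
                finally { _rw.ExitWriteLock(); }
            }
        }
        finally { _rw.ExitUpgradeableReadLock(); }
    }

    // Style 3 (write locks for everything) is omitted: as noted above it
    // defeats the purpose of a reader/writer lock.
}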

Setting aside the use of either Reader/Writer class as write-only, that gives us 9 different possibilities (including the “lock” keyword) for implementing data synchronization: 5 different implementations for write-only locking and 4 different implementations providing separate read and write locks.

Special case: Interlocked

The Interlocked class is something of a special case when it comes to data synchronization because it performs synchronization very differently than the other options.  The above synchronization methods provide a locking mechanism that, once the lock is obtained, allows any number of operations to be performed while it is held.  The Interlocked class instead provides methods for atomic data synchronization.  Atomic means that the write operation occurs in a single step and will never conflict with other threads.  This only works for a single, simple data type (such as an integer counter) and does not apply to cases where there are multiple data items that are related and must be kept in sync.  For cases where Interlocked does apply it is by far the fastest and easiest to use.

Interlocked offers a number of possible operations, for the purpose of these benchmarks I only used Interlocked.Increment because the benchmarking code was written around reading and incrementing a shared integer counter.
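
The usage is about as simple as synchronization gets:

using System.Threading;

public class HitCounter
{
    private long _count;

    public void Hit() => Interlocked.Increment(ref _count);   // atomic read-modify-write
    public long Current => Interlocked.Read(ref _count);      // atomic 64-bit read
}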

My Interfaced Locking Pattern

Implementing locking code correctly can be a bit tricky and it is all too easy to make a mistake that results in a lock never being properly released.  If a lock is acquired and does not get released the system will freeze up.  For this reason, locking should always be implemented using try/finally blocks.  But even then it is quite easy to make a small mistake in the code that could cause the lock not to be released under certain circumstances.  Finding bugs of this nature is extremely difficult and time consuming because they tend to occur unpredictably, and unless the system happens to freeze up for a developer while the debugger is running, figuring out exactly where the problem occurs can be nearly impossible.

For these reasons any pattern that greatly reduces or eliminates the chance of programmer error when using locks is highly desirable.  The lock keyword built into C# does this because behind the scenes it automatically adds a try/finally block that will always release the lock appropriately.  This is great if using Monitor for locking aligns well with the work being done by the system.  But it does not help in cases where Monitor does not perform well (such as when there are many more concurrent reads than writes).

The solution I came up with years ago (before the lock keyword existed) was to create a pattern that uses a locking mechanism exposed via an interface that implements IDisposable to handle releasing of the lock.  With this pattern locks are always obtained via a using statement which will always ensure the lock gets released correctly (much like the lock keyword).  I’ve found this pattern works extremely well and since adopting it I can’t remember a single case where my code didn’t correctly release a lock. 

Do note that this pattern may not be 100% perfect though (especially depending on exactly how it is implemented).  In environments where a lot of threads get aborted using ThreadAbortException it may be theoretically possible for the exception to get thrown between when the lock is acquired and the using block takes effect.  I’m not completely certain about this and would need to carefully analyze the CIL that gets produced by the using statement to figure it out.  It would be a rare occurrence in any case and would never be a meaningful issue for systems that don’t utilize ThreadAbortException except during shutdown.  This is important to be aware of when considering adopting this pattern.  In my personal experience the elimination of coding errors for locks has been worth the slight risks in the situations I’ve used this pattern.  It is worth noting that I read something from Microsoft (release notes?) stating that the lock keyword also had this same risk under some circumstances in the past (I believe I read that they eventually fixed it).

The lock keyword coding pattern and my interfaced locking pattern are very similar except that the interfaced pattern requires instantiating a class that implements the interface before it can be used.  This can easily be viewed as a benefit though, because it allows the developer to choose the underlying locking mechanism instead of having it chosen for them (lock always uses Monitor).

The lock keyword coding pattern:

lock (someObject)
{
    // perform work inside of lock
}

My interfaced locking pattern:

private readonly ThreadLockReadWrite _lock = new ThreadLockReadWrite();

using (_lock.ReadLock())
{
    // perform read-only work inside of lock
}

using (_lock.WriteLock())
{
    // perform work inside of lock
}

using (_lock.ReadUpgradableLock())
{
    // perform read work inside of lock
    using (_lock.UpgradeLock())
    {
        // perform write work inside of lock
    }
}
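
For anyone curious how such a wrapper can work, here is a minimal sketch (not my actual library implementation) built on ReaderWriterLockSlim: each acquire returns an IDisposable whose Dispose releases the lock, which is what lets the using statement guarantee the release:

using System;
using System.Threading;

public class ReadWriteLockSketch
{
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    public IDisposable ReadLock()
    {
        _lock.EnterReadLock();
        return new Releaser(_lock.ExitReadLock);
    }

    public IDisposable WriteLock()
    {
        _lock.EnterWriteLock();
        return new Releaser(_lock.ExitWriteLock);
    }

    private sealed class Releaser : IDisposable
    {
        private Action _release;
        public Releaser(Action release) { _release = release; }
        public void Dispose() { _release?.Invoke(); _release = null; }
    }
}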

Use Cases

Benchmarking multi-threaded data synchronization isn’t as simple as running a few benchmarks and picking the fastest.  The environment in which the data synchronization will occur has a very large impact on which implementation performs the best.  There are a few key factors that help determine this which I’ve factored into the benchmarks.

Number of threads – How many threads will be running simultaneously and need access to the shared data is an important consideration.  Separate benchmarks for 1, 2, 10 and 100 threads were performed.

Reads vs Writes – How often threads will only need read access compared to how often they will modify the data is a critical factor.  Separate benchmarks for writing 100%, 10% and 1% of the time were performed.

Amount of time locks are held – This can be a factor in the decision making, but for these benchmarks I’ve used consistent delay times to simulate work being done inside read & write locks.  Since the goal is to keep time inside of locks to the minimum possible, and the biggest impact here is how much time is spent in read locks, this decision can be simplified.  The key factor here is that if threads need to do a lot of work inside read operations then it is important to make sure reads do not block each other.  For these situations either ReaderWriterLock or ReaderWriterLockSlim should always be used with separate locks for reading and writing.

Recommendations based on the benchmark results

Interlocked

When the data to be synchronized is simple, not inter-related and lends itself to atomic operations use the Interlocked class.  While there are a few cases where the Reader/Writer locks perform equally well there are no cases where anything outperforms Interlocked.  So use it whenever possible.

Mostly Writes

When most of the operations are writes – Use the lock keyword.  When most operations require write access all of the locking mechanisms (except Interlocked, see above) perform about the same.  The lock keyword provides a very nice coding pattern that prevents errors caused by locks not getting released.

ReaderWriterLock vs ReaderWriterLockSlim

The API for using the ReaderWriterLockSlim class is easier to work with compared to the ReaderWriterLock class.  This is particularly true when dealing with upgrading read locks to write locks.  Unfortunately, something is wrong inside the Slim class that can cause its lock upgrading to execute very slowly, so I would highly recommend against ever using ReaderWriterLockSlim with upgradable locks.  When a lot of threads are involved the ReaderWriterLock (non-slim) class also tends to outperform the Slim variant by a noticeable amount.  Given those I’d recommend using ReaderWriterLock at this time instead of ReaderWriterLockSlim.

Read/Write vs Read/Upgrade

WARNING: If using the ReaderWriterLockSlim class DO NOT obtain an upgradable read lock and then upgrade to write locks.  The benchmarks show that, for some reason, this can execute VERY SLOWLY and tends to perform closer to mechanisms that only supply a single (write) lock such as Monitor.  Please note that, for whatever reason, this does not happen with the ReaderWriterLock implementation only the Slim variant!  Sadly the API for the slim variant is much easier to use and implementing read->upgrade with the non-slim class is somewhat of a pain to code.  This is another case where my interfaced locking pattern would be very helpful (since it hides the additional complexity required for the non-slim class).

When dealing with a modest number of threads, obtaining a read lock and then a write lock when needed appears to run a bit faster than obtaining an upgradable read lock and then upgrading to a write lock when needed.  The catch is that with the non-upgradable approach the read lock must be released before obtaining the write lock, which probably requires some additional thought to implement the threaded code correctly.  The performance difference doesn’t seem to be very much, so the reduced complexity of read->upgrade is probably warranted for most circumstances.

Performance for read->upgrade was particularly good for the benchmarks with 100 threads where reads were 99.9% (only 1 out of 1,000 operations was a write).  So it may be worth considering using the upgradable locks when dealing with very many threads that only read the vast majority of the time.  But as mentioned elsewhere in this document never use the ReaderWriterLockSlim with upgradable locks because it has internal problems.

My Interfaced Locking Pattern

My most interesting takeaway from these benchmarks is how my interfaced locking pattern performs compared to direct, in-place implementations.  I had assumed that since the pattern instantiates objects to handle the IDisposable which are then discarded, this additional work would have at least a small but noticeable impact on performance.  Surprisingly, the interfaced vs direct implementations of the same underlying mechanism tended to perform equally well in the benchmarks.  This is because the amount of additional work for the interfaced pattern is small and the amount of simulated work in the threads is much more significant.  For cases where locks are held for extremely short durations the ratio would change and the interfaced pattern might perform slower than the direct alternative.  But, as noted above, the point of my interfaced locking pattern is to reduce bugs and program freezes, so a minor performance cost would be fine in most cases.

Benchmark Results

Benchmarks with a Single Thread

If a single thread will be used all of the locking mechanisms are almost identical in their performance so which one is chosen doesn’t matter much.  Even Interlocked.Increment doesn’t execute faster when only a single thread is used.  Something to consider: If there is only a single thread working with the data are you sure you need thread synchronization?  This might occur if most of the time only a single thread will be running but under certain circumstances / high load additional threads could be started.  If that is the case I’d suggest optimizing for when multiple threads are running.

Benchmarks with Two Threads

Mostly writes – When most (or all) operations are writes then Interlocked.Increment is about twice as fast as anything else and all of the alternatives are roughly equal in performance.  If possible use Interlocked, otherwise I’d suggest using the lock keyword for these situations.

10x Reads – When reads occur 90% of the time Interlocked.Increment is still fastest, but the Reader/Writer locks aren’t far behind.  All of the other mechanisms (that only implement a single write lock) are equally slower.  For high performance systems the additional complexity of reader/writer locks with separate read & write locks would be recommended.  But the simplicity of the lock keyword may outweigh the performance gains in some circumstances.

100x Reads – When reads occur 99% or more of the time the Reader/Writer locks are almost as fast as Interlocked.Increment with all of the other alternatives being much slower.  I’d definitely recommend using a Reader/Writer with separate read & write locks in all cases under these circumstances.

Benchmarks with 10 Threads (close to the number of physical threads in my CPU)

Mostly writes – When most (or all) operations are writes then Interlocked.Increment is 10 times as fast as anything else and all of the alternatives are roughly equal in performance.  If possible use Interlocked, otherwise I’d suggest using the lock keyword for these situations.

10x Reads – When reads occur 90% of the time Interlocked.Increment is still fastest by a wide margin, so use that if possible.  The Reader/Writer locks perform much better (about 3.5 times faster) than the other single (write only) locking mechanisms.  I’d recommend using a Reader/Writer lock implementation with separate read & write locks under these circumstances.

100x Reads – When reads occur 99% or more of the time the Reader/Writer locks are almost as fast as Interlocked.Increment with all of the other alternatives being much slower.  Separate Reader/Writer locks perform about 8.5 times faster than the single (write only) lock alternatives.  I’d definitely recommend using a Reader/Writer with separate read & write locks in all cases under these circumstances.

Benchmarks with Many (100) Threads

Mostly writes – When most (or all) operations are writes then Interlocked.Increment is almost 100 times faster than anything else!  When possible definitely use Interlocked.  For locking situations that can’t be handled with Interlocked I’d suggest using the lock keyword.

10x Reads – When reads occur 90% of the time Interlocked.Increment is still by far fastest (~10 times faster than Reader/Writer locks).  When possible definitely use Interlocked.  If Interlocked can’t be used the Reader/Writer locks perform about 10 times faster than their single (write only) lock alternatives.  I’d definitely recommend using a Reader/Writer lock implementation with separate read & write locks under these circumstances.

100x Reads – When reads occur 99% or more of the time Interlocked.Increment still pulls ahead with this many threads involved.  Interlocked.Increment is about 50% faster than using Reader/Writer locks but is far more restrictive.  I’d probably use Interlocked if possible.  Separate Reader/Writer locks perform about 15 times faster than the single (write only) lock alternatives.  I’d definitely recommend using a Reader/Writer with separate read & write locks in all cases under these circumstances.

Internalizing knowledge is very useful

The human mind is incredible. It can accomplish great things through conscious thought, but what it can learn to internalize and do automatically is remarkable. As a result, learning to internalize software development principles and concepts can allow developers to accomplish much more.

General Examples

Starting from a young age in school we are taught the alphabet, then words, sentences, paragraphs, essays and beyond. There are many details and rules that we are taught over years. Over time these get internalized by the brain and become automatic processes. Years after leaving school, when we write something we don’t think about nouns, verbs, adjectives, grammar trees, etc. All of that knowledge has become mostly automatic, so we just sit down and write. Not having to consciously focus on those details frees up our minds to think about higher level concepts like our goals for the writing, composition, etc.

Later on most people learn to drive. Driving involves following a bunch of rules, doing multiple things simultaneously and being aware of the situation and surroundings moment to moment. Not making mistakes is important because accidents are dangerous. When first starting out, driving is really hard and kind of scary. It can feel overwhelming to do all of the things required constantly. But jump ahead 10+ years of continuous daily driving and it’s so easy we don’t even pay conscious attention. The mind can internalize everything required for driving so thoroughly that one can drive between home and work and, upon arrival, not even be able to remember anything about the drive. It’s like being on auto-pilot.

The human mind is able to internalize all sorts of things if the individual does them frequently over a long period of time. Years after something has been internalized it becomes difficult to consciously recall the rules and facts required by the task that has become automatic. This is beneficial because when things become automatically handled by our subconscious it frees up our conscious mind to think on a higher level about the task or (as with driving) to do other, unrelated things.

Software Development & Myself

Applying this to myself, I’ve spent so much time doing software development for so long that I’ve internalized a great deal.

I started programming when I was 7 or 8. By 11 I was pretty good; By 14 I was doing really complex, detailed programming. I really enjoyed development so I spent a ton of time learning as much as I could and constantly improving myself. I had no clue, but by the time I graduated high school I was significantly better at software development than most college graduates. From there I went on to tackle bigger, more complex and more unusual projects.

During my early years there were incredibly few books available, and none in normal book stores. If you could find them they had to be mail ordered. Thus I figured out most of the fundamental concepts, patterns, etc. myself before I ever read or heard about them. The “SOLID” principles had not been defined, but I figured them out for myself (except dependency inversion, which wasn’t really practical at the time). I learned the concepts of object oriented programming and got very good at it 20+ years ago. I was doing serious, complex architecture work before I ever heard the term “software architecture”.

As a result of all that time and effort I’ve internalized a great deal of software development practice. I haven’t consciously thought about fundamental principles, OOP concepts or the like in many years. I don’t think about fundamental design patterns or optimization either. They have all become obvious to me now and they get applied automatically. As I write code all of that gets baked in automatically. Heck, I even subconsciously optimize my code, implementations and architectures on various levels with little thought. It’s truly amazing what the subconscious can be taught to do. Which is great, because having all that be subconscious allows me to consciously focus on business requirements, unusual aspects and the big picture.

As a side note, this is what partially (mostly?) inspired my logo.

Extrapolation & Early Development

I’d imagine anyone can reach this level of internalization given enough time. This is why putting a lot of time into software development over many years really pays off.

I do sometimes wonder if starting at a very early age makes a difference. Apparently being bilingual from a young age causes long-term changes in the brain. Source code is a kind of language; I wonder if that has any long-term impact.

Site and all contents Copyright © 2019 James B. Higgins. All Rights Reserved.