Using ExtensionMethods with Moq to Make Your Unit Tests More Readable

As you may have noticed from last week’s post, I’ve been doing a lot of Unit Testing work recently. The product I’m working on is huge and very complex, which makes Unit Testing (as well as general coding) a real challenge. I’ve decided to write a more technical post this week focusing on a trick I’ve found which I believe really helps improve the quality of your tests.

One of the tools we’re using is Moq, a mocking framework much like RhinoMocks or NSubstitute (the latter I’ve not used, but it comes recommended).

One of the biggest challenges developers face when creating Unit Tests is maintaining readability as the test grows. Consider the following typical test:

[Test]
public void TypicalTestSetup()
{
  // Arrange
  var mock = new Mock<IDataAccess>();
  mock.Setup(x => x.GetIDOfRowWithGuid(It.IsAny<Guid>())).Returns(12);
  mock.Setup(x => x.GetValueFromDatabase()).Returns(1701);
  mock.Setup(x => x.SaveValue(73)).Returns(36);
  var sut = new BusinessLogic(mock.Object); 

  // Act
  var result = sut.DoWork(7);

  // Assert
  Assert.AreEqual(42, result);
}

As the mock is just an object, there’s no reason you can’t create ExtensionMethods for it. If I create the following method:

/// <summary>
/// Sets up the method GetIDOfRowWithGuid to return the value given
/// </summary>
public static void SetupGetIDOfRowWithGuid(this Mock<IDataAccess> mock, int value)
{
  mock.Setup(x => x.GetIDOfRowWithGuid(It.IsAny<Guid>())).Returns(value);
}

This allows us to call the setup code much more succinctly:

mock.SetupGetIDOfRowWithGuid(12);

Personally, I also like to return the mock from the ExtensionMethod:

/// <summary>
/// Sets up the method GetIDOfRowWithGuid to return the value given
/// </summary>
public static Mock<IDataAccess> SetupGetIDOfRowWithGuid(this Mock<IDataAccess> mock, int value)
{
  mock.Setup(x => x.GetIDOfRowWithGuid(It.IsAny<Guid>())).Returns(value);
  return mock;
}

This is because it allows you to create more fluent, chainable setup instructions:

// Arrange
var mock = new Mock<IDataAccess>()
  .SetupGetIDOfRowWithGuid(12)
  .SetupGetValueFromDatabase(1701)
  .SetupSaveValue(73, 36);
var sut = new BusinessLogic(mock.Object);			

This, in my opinion, is far more readable; it simplifies your setup and can be used to create powerful, reusable setups and verifies.
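
For completeness, here’s a rough sketch of what the other helpers in that chain might look like. The method names SetupGetValueFromDatabase and SetupSaveValue are taken from the chained calls above, the IDataAccess member signatures are assumed from the earlier Setup calls, and the verify helper is a hypothetical extra showing the same idea applied to verification.

/// <summary>
/// Sets up GetValueFromDatabase to return the value given
/// </summary>
public static Mock<IDataAccess> SetupGetValueFromDatabase(this Mock<IDataAccess> mock, int value)
{
  mock.Setup(x => x.GetValueFromDatabase()).Returns(value);
  return mock;
}

/// <summary>
/// Sets up SaveValue to return the given result when called with the expected input
/// </summary>
public static Mock<IDataAccess> SetupSaveValue(this Mock<IDataAccess> mock, int expectedInput, int result)
{
  mock.Setup(x => x.SaveValue(expectedInput)).Returns(result);
  return mock;
}

/// <summary>
/// Hypothetical verify helper: checks SaveValue was called exactly once with the expected input
/// </summary>
public static Mock<IDataAccess> VerifySaveValue(this Mock<IDataAccess> mock, int expectedInput)
{
  mock.Verify(x => x.SaveValue(expectedInput), Times.Once());
  return mock;
}

Returning the mock from each helper is what makes the chained Arrange block above possible.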

Have you used ExtensionMethods with mocks? What other tips and tricks do you use to keep your Unit Tests in order?

Starting Unit Testing in a Huge Codebase

Most people agree that Unit Tests are a good idea, and most developers try to write them (with varying degrees of success). But the challenge of creating Unit Tests for existing projects can be incredibly daunting to developers.

Many people may not see the need: if an area of code has been working for quite some time, why introduce tests which will take a lot of time to write and (if you need to refactor to make the code testable) introduce risk? When I wrote about the value of Unit Tests back in 2015 I postulated that the value of these small executables is not in finding bugs, it’s in preventing bugs in the future.

Unit Testing, unlike exploratory testing, is not about finding issues with existing code; it’s about reinforcing and documenting (through executable code) exactly how each function, class, and property should behave in particular circumstances. If you understand this then you’ll see that the value of Unit Tests does not diminish with an established solution (even if most of the teething bugs have already been worked out). In fact, it makes your existing code base much easier to maintain safely!

So the question becomes: how and where do we start?

I’m currently working on an application which has in excess of five million lines of code. Some parts are new and some date back to the project’s inception. Writing a full set of tests for the entire application would be a monumental (and, I have to admit, largely pointless) exercise.

What we need to do is look at the areas which are most in flux. If Unit Tests are a technique for helping to protect our software against unexpected change then the areas where they deliver the most benefit are the areas of code which change frequently.

We’re engineers, not psychics (at least I’m not), so use metrics. Look at your Source Control history and see which classes are subject to frequent change (both bug fixes and feature work count). Target your Unit Tests here; use the 80:20 rule and focus your efforts in the right place. One of my favourite sayings applies here… how do you eat an elephant? A little at a time!
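
As a concrete illustration (a minimal, hypothetical sketch, assuming your history is in git and that git is on the PATH), you could rank files by how often they change with something like this:

// Hypothetical sketch: rank files by change frequency using "git log".
// Run from the root of the repository; this is illustrative, not a prescribed tool.
using System;
using System.Diagnostics;
using System.Linq;

class ChurnReport
{
  static void Main()
  {
    var psi = new ProcessStartInfo("git", "log --name-only --pretty=format:")
    {
      RedirectStandardOutput = true,
      UseShellExecute = false
    };

    using (var process = Process.Start(psi))
    {
      var hotspots = process.StandardOutput
        .ReadToEnd()
        .Split(new[] { '\n' }, StringSplitOptions.RemoveEmptyEntries)
        .Select(line => line.Trim())
        .Where(line => line.Length > 0)
        .GroupBy(file => file)
        .OrderByDescending(g => g.Count())
        .Take(20);

      // The files at the top of this list are prime candidates for Unit Tests
      foreach (var group in hotspots)
        Console.WriteLine($"{group.Count(),5}  {group.Key}");
    }
  }
}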

Slowly, little by little, the classes which see the most changes will stabilise and you’ll introduce fewer bugs when expanding them or fixing existing defects.

These are my thoughts, do you have any experiences breaking down huge applications into Unit Tested code? How did you do it?

Why Branching still has a Place in the Agile World

I’m sure every developer out there would love to have a single code base with builds which are automatically tested and pushed out to customers. However, let’s assume for a moment that you’ve not got a full CI system with triggered builds, automated testing, and thousands of automated deployments a day.

For the sake of argument, let’s say you’ve got some of these pieces in place. Perhaps you have nightly builds? Maybe even some unit tests! But if you’re like most of the companies I’ve worked with, you still run manual signoff scripts and have a couple of guys who do the deployments, know every config setting by heart, and can get your application to run in all kinds of new and innovative ways.

Despite what The Phoenix Project tells us, deployments to customers are still a big deal, and version management (finding a way to release bug fixes but not new features) is a fact of life.

If your customers are anything like mine they’ll be incredibly nervous about taking new features. They want lengthy UAT phases and opportunities to train their staff on new functionality. This may seem like Waterfall to us, but remember that many of our clients have stakeholders in some very big businesses. In my industry (Health and Leisure), for example, we have code freezes over the January period while New Year’s Resolutioners and marketing campaigns are at their peak. Very few businesses would open themselves up to the risk of IT failures during these critical times.

And yet support contracts and maintenance don’t stop. The busy times are when the software is really put to the test, and we must be able to respond to any issue which may arise.

This is why we use maintenance branches.

Our job as developers and IT professionals is to deliver a good service to our customers. We need tools and processes to do this. Agile and Continuous Integration are two of those tools but if they don’t help us meet business needs then we should seek others which do.

My customers tell me that they want to be able to take patches quickly if required, and that they want to be able to access fixes without having to test and learn new functionality.

Techniques such as feature toggles help, but in my view the only way to truly meet this need is to cut a release branch after each feature (or, for convenience, block of features) is completed. We usually use a minor version to represent this. Using this model we can support customers without surprising them with new functionality, and continue to develop knowing that we can put out maintenance releases of older versions at any time.

Controversial? Perhaps. Practicality over purity… I hope so. What are your views on release branches and supporting customers with on-premise installations?

Have you tried At Desk Testing?

Last week I wrote about the value of finding issues early, and how it becomes increasingly expensive and time-consuming to fix issues the further down the development lifecycle you get. With that in mind, we can appreciate that anything we can do to find bugs earlier makes our software not only better but cheaper to develop.

Something we’re trialling at the moment is at-desk demos. The idea is simple: before signing off a piece of work and passing it on to the next link in the chain (Dev to QA, QA to Support Analyst, Support Analyst to Dev, and so on) you demonstrate the issue or feature to them.

For example, before I finish a feature and pass it on to someone who specialises in testing, I invite my buddy over to “give it a bash”.

Remember last week? I talked about the time it takes to move from one link in this chain to another. I discussed how it can take a few hours to build your software, another hour or so to deploy it, and a day to run the signoff scripts (obviously this varies if you’re fortunate enough to be working on a ‘modern’ solution or have invested in some proper CI). Time moving backwards is time wasted; if you can avoid rework then you should always take the opportunity to do so!
Offering up my work to QA for a few minutes before formally handing over can save hours of wasted time. These guys know what they’re looking for and can often find edge cases and give feedback on scenarios you’ve not considered. Having these pointed out early saves all of that extra time!

The same theory can be applied to a Support Analyst demoing bugs to a Developer rather than just recording replication steps on a ticket or a Developer showing a bug fix to the same analyst before shipping it to a customer’s UAT environment for testing.

So far it’s working well for us. Do you demo before handing over? Do you feel it works for your team?

The Cost of Fixing Bugs Late

Have you ever been in a situation where you’ve written a piece of code and it’s almost there, but there are a few niggling issues which “can wait for V2”? Have you ever wished, maybe months later, that you’d gone back and resolved them at the time? You’re not alone!

More and more developers and managers are beginning to realise the true cost of fixing bugs late, not only on their sanity but on the company’s time and money.

Let’s say for the sake of argument that you’ve finished a piece of code except for a few edge cases. You’re under a lot of time pressure so you decide it’s of a high enough quality and you push it through to QA. The cost to the business of fixing that issue actually increases dramatically the longer it’s left.

Let’s look at a few scenarios:

You fix the issue immediately

  • Fix time – 30 minutes

You send to QA and it’s rejected there

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Fix time – 30 minutes

QA passes as an “edge case” but the customer disagrees

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Deploy to UAT – 3 hours
  • Customer UAT – 1 week

Missed in UAT but discovered as a significant issue in Live

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Deploy to UAT – 3 hours
  • Customer UAT – 1 week
  • Customer live deploy – 3 hours

As you can see, even with these rough estimates, the time it takes to resolve the customer’s issue increases dramatically the longer it’s left. Not only that, but the cost to the business of paying staff to run extra QA cycles or rounds of UAT spirals out of control. We haven’t even considered the final case, where the issue goes to the bug queue to die and a completely different developer has to learn the feature before they can resolve the problem.

The graph is actually a very common shape*.

[Image: graph-2 – cost of fixing an issue rising the later it is found]

I’m a big fan of practicality; no software is going to be perfect, and it’s unrealistic to expect that you’re going to find and fix each and every issue before shipping to a customer.

However, hopefully this has made you stop and think, and has provided a strong argument for taking that extra 30 minutes to resolve the issue before it causes a real headache for the business!

*Thanks to fooplot.com for the graphing software.

Why you MUST run your Unit Tests as part of your Build Process

I was browsing StackOverflow the other day (as many geeks are known to do from time to time) and read an answer which described writing Unit Tests as being like going to the gym. Now, I’m not sure about the final few sentences about finding another job, but I do really like the analogy: it’s hard when you start out, but the more you do it the easier it gets and the more value it adds.

Now, like going to the gym, there are a couple of fundamentals which, if forgotten, can undermine everything. Unit Tests don’t require proper warm-ups or cool-downs, but they absolutely, definitely must be run on the build server for their real value to be realised.

Let me explain why.

Developers are often hackers by nature; we install different things and experiment. That’s what makes us good at our job, but it’s also what makes every developer’s computer slightly different, so software which runs on one machine could behave very differently on another.

What’s worse, if your developer is in a rush or having a bad day they may completely ‘forget’ to execute the tests before committing code. Now, instead of rigorously tested and proven code being added to source control, you’ve got a risk.

Tools like NCrunch help reduce this, but the only absolutely foolproof way to ensure that all your tests are executed and passed before deployment is to get your server to run them. Missing out this vital step means you’re opening your build and deployment process up to human error, something you should be very nervous about doing.

Almost all build servers have the capability of running tests so dig out the manual and make sure that when the tests fail, the build fails!

Playing with Microsoft Cognitive Services

It’s time for a little fun – inspired by Martin Kearn’s talk at DDDNorth, I fired up the Microsoft Cognitive Services website, signed up and got myself a key for the Emotion API. It didn’t take me long to get up and running!

I briefly tried using the SDK library but ran into some trouble with strongly typed names. Deciding to abandon that approach, I used the method Martin recommends on his blog and simply downloaded the JSON myself and parsed it.

Here’s my code:

var _apiUrl = @"https://api.projectoxford.ai/emotion/v1.0/recognize";

using (var httpClient = new HttpClient())
{
  // _apiKey and filename are fields set elsewhere in the form
  httpClient.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", _apiKey);
  httpClient.DefaultRequestHeaders.Accept.Add(new MediaTypeWithQualityHeaderValue("application/octet-stream"));

  // set up the request body as the raw bytes of the image
  HttpContent content = new StreamContent(new MemoryStream(File.ReadAllBytes(filename)));
  content.Headers.ContentType = new MediaTypeWithQualityHeaderValue("application/octet-stream");

  // make the request
  var response = await httpClient.PostAsync(_apiUrl, content);

  // read the response and deserialise the JSON
  var responseContent = await response.Content.ReadAsStringAsync();
  var objResponse = JsonConvert.DeserializeObject<RootObject[]>(responseContent);

  // draw a rectangle and the emotion text over each detected face
  var image = Image.FromFile(filename);
  using (var gfx = Graphics.FromImage(image))
  {
    foreach (var face in objResponse)
    {
      gfx.DrawRectangle(new Pen(Color.Red, 5),
        new Rectangle(new Point(face.faceRectangle.left, face.faceRectangle.top),
          new Size(face.faceRectangle.width, face.faceRectangle.height)));

      var text = this.GetText(face.scores);

      gfx.DrawString(text, new Font(FontFamily.GenericSerif, 64), new SolidBrush(Color.Red),
        face.faceRectangle.left + 24,
        face.faceRectangle.height + face.faceRectangle.top);
    }
  }
  this.pictureBox1.Image = image;
}
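
For reference, the RootObject types I deserialise into are just plain DTOs. Here’s a rough sketch of what they might look like; the property names mirror the fields used above (faceRectangle, scores), and the individual emotion properties are my assumption of the Emotion API’s response shape rather than a definitive contract.

// Hypothetical DTOs matching the JSON fields accessed above
public class RootObject
{
  public FaceRectangle faceRectangle { get; set; }
  public Scores scores { get; set; }
}

public class FaceRectangle
{
  public int left { get; set; }
  public int top { get; set; }
  public int width { get; set; }
  public int height { get; set; }
}

public class Scores
{
  // Assumed emotion fields; my GetText helper formats these for display
  public double anger { get; set; }
  public double contempt { get; set; }
  public double disgust { get; set; }
  public double fear { get; set; }
  public double happiness { get; set; }
  public double neutral { get; set; }
  public double sadness { get; set; }
  public double surprise { get; set; }
}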

I’m very impressed with how easy it is (in fact, most of that code is just updating the picture box in my little WinForms app!).

What do you think? Do I look about 65% surprised?

[Image: emotion_screenshot]

My first foray into AngularJS

I’ve been aware of AngularJS for some time; a talk by Craig Norton at Agile Yorkshire piqued my interest, but I’m ashamed to say that I’ve never invested the time to look into it. A few comments colleagues and blogs have made about breaking changes in V2 have put me off somewhat: is Angular turning into another Silverlight?

We’ve been developing an in-house DevPortal at work, and a colleague of mine was very keen to use Angular and Bootstrap to investigate their value before bringing them into our core product.

Over the past few weeks I’ve been feeling somewhat left behind, unable to contribute to some exciting process changes because I wasn’t up to speed with the technology. Not liking my lack of understanding, and knowing how it feels to be the only evangelist on a team, I’ve invested a little time this weekend to try and understand the basics of Angular.

With the help of Pluralsight I’ve started covering the basics. I’m not very far through yet, but I can already remember what intrigued me at Craig’s talk: this is HTML and JavaScript, but built on the kind of powerful framework you’d expect from a .NET application.

This is the Hello World of AngularJS: we create a module and a controller, and bind helloMessage in the markup to the variable defined on $scope.

<!doctype html>
<html ng-app="app">
<head>
</head>
<body>
  <h1 ng-controller="HelloWorldCtrl">{{helloMessage}}</h1>
  <script src="https://code.angularjs.org/1.4.0/angular.js"></script>
  <script type="text/javascript">
    angular.module('app', []).controller('HelloWorldCtrl', function ($scope) {
      $scope.helloMessage = "Hello World";
    });
  </script>
</body>
</html>

I’m looking forward to seeing what’s in the second module!

Multiple Binding Attributes in SpecFlow

I recently discovered something rather nice in SpecFlow. I was implementing a scenario like this:

Scenario: Save a user
Given I have a user with the name Joe Bloggs
When I save them
Then the user should have a FirstName Joe
And the user should have a LastName of Bloggs

I wanted to provide flexibility in the assertions so our QA could decide how he wanted to phrase the text in the scenario. Logically, however, we’d want the same binding for each variation.

Here’s what I came up with:

[Then(@"the user should have a (.*) (.*)")]
[Then(@"the user should have a (.*) of (.*)")]
public void ThenTheUserShouldHaveA(string field, string value)
{
  var user = GetUser();
  Assert.AreEqual(value, user.Properties[field]);
}

However, this didn’t work: the greedy (.*) groups meant I kept getting a field of “FirstName of”. I then discovered that you can reverse the order of the binding attributes to give one priority.

Updating the attributes to

[Then(@"the user should have a (.*) of (.*)")]
[Then(@"the user should have a (.*) (.*)")]

This change gave the “of” binding precedence and ensured that both scenario steps worked correctly.
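
For clarity, the final step definition with the attributes in their working order looks like this (GetUser and the Properties dictionary are the same assumed test-context helpers as in the original snippet):

[Then(@"the user should have a (.*) of (.*)")]
[Then(@"the user should have a (.*) (.*)")]
public void ThenTheUserShouldHaveA(string field, string value)
{
  var user = GetUser();
  Assert.AreEqual(value, user.Properties[field]);
}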

What’s new (and cool) in C# 6?

Many of the improvements in C# 6 are based around Roslyn: rebuilding the compiler makes it much easier to write Visual Studio plugins and to do fancy compilation inside our own applications. This is all very exciting stuff, but it doesn’t really impact me day to day. There are, however, two little features which I am using every day and which have already slipped into my standard syntax.

The Null Conditional Operator

The Null Conditional Operator is game-changing. Previously we’d have to perform a series of null checks to ensure that we weren’t going to receive a NullReferenceException when accessing the child properties of an object.


if (user != null && user.Communication != null && user.Communication.PhoneNumber != null)
{
  user.Communication.PhoneNumber.Call();
}

However, with the introduction of the ?. operator we can do this all in a single line.


user?.Communication?.PhoneNumber?.Call();

If the user is not null and has both a Communication object and a PhoneNumber, then the call is made; otherwise the chain short-circuits and nothing happens.

This becomes even more powerful when combined with our old friend the ?? operator.


var numberToCall = user?.Communication?.PhoneNumber ?? DEFAULT_CONTACT_NUMBER;

Now, if user or Communication is null then, instead of throwing a NullReferenceException, the ?. chain simply evaluates to null. Combined with ??, if any of user, Communication or PhoneNumber is null, the entire statement returns DEFAULT_CONTACT_NUMBER instead.

String Interpolation

The new String Interpolation feature is so simple, but I’ve found myself using it almost exclusively since it became available.

In the past we’d write


var myMessage = string.Format("Welcome back {0}", user.Salutation);

This was fine, perhaps a little clunky when you had a lot of strings to merge in, but we were happy. That was until I discovered the new C# 6 String Interpolation!


var myMessage = $"Welcome back {user.Salutation}";

Not only is this more concise, but it also eliminates the frustration we’ve all experienced when mixing up the position of the arguments.
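
As a purely hypothetical illustration of that positional mix-up (messageCount is a made-up variable for the example):

// With string.Format it's easy to transpose the arguments without noticing
var oops = string.Format("Hello {1}, you have {0} new messages", user.Salutation, messageCount);

// With interpolation each value sits exactly where it's used, so there's nothing to transpose
var fine = $"Hello {user.Salutation}, you have {messageCount} new messages";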

Two simple changes, two huge reasons to use C# 6 now!