My first foray into AngularJS

I’ve been aware of AngularJS for some time; a talk by Craig Norton at Agile Yorkshire piqued my interest, but I’m ashamed to say I’ve never invested the time to look into it. A few comments from colleagues and blogs about breaking changes in V2 have put me off somewhat. Is Angular turning into another Silverlight?

We’ve been developing an in-house DevPortal at work, and a colleague of mine was very keen to use Angular and Bootstrap to investigate whether they’d be worth bringing into our core product.

Over the past few weeks I’ve been feeling somewhat left behind, unable to contribute to some exciting process changes because I wasn’t up to speed with the technology. Not enjoying my lack of understanding, and knowing how it feels to be the only evangelist on a team, I’ve invested a little time this weekend trying to understand the basics of Angular.

With the help of Pluralsight I’ve started covering the basics. I’m not very far through yet, but I can already remember what intrigued me at Craig’s talk: this is HTML and JavaScript, but built on the kind of powerful framework you’d expect from a .NET application.

This is the Hello World of AngularJS: we create a module and a controller, and the {{helloMessage}} expression in the markup is bound to the variable defined on $scope.

<!doctype html>
<html ng-app="app">
  <head>
  </head>
  <body>
    <h1 ng-controller="HelloWorldCtrl">{{helloMessage}}</h1>
    <script src="https://code.angularjs.org/1.4.0/angular.js"></script>
    <script type="text/javascript">
      angular.module('app', []).controller('HelloWorldCtrl', function ($scope) {
        $scope.helloMessage = "Hello World";
      });
    </script>
  </body>
</html>

I’m looking forward to seeing what’s in the second module!

Multiple Binding Attributes in SpecFlow

I recently discovered something rather nice in SpecFlow while implementing a scenario like this:

Scenario: Save a user
Given I have a user with the name Joe Bloggs
When I save them
Then the user should have a FirstName Joe
And the user should have a LastName of Bloggs

I wanted to provide flexibility in the assertion steps so our QA could decide how he wanted to phrase the text in the scenario. Logically, however, we’d want the same binding for each variation.

Here’s what I came up with:

[Then(@"the user should have a (.*) (.*)")]
[Then(@"the user should have a (.*) of (.*)")]
public void ThenTheUserShouldHaveA(string field, string value)
{
  var user = GetUser();
  Assert.AreEqual(value, user.Properties[field]);
}

However, this didn’t work: I kept getting a field of “FirstName of”. It turns out that you can reorder the binding attributes to give one pattern priority.
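The reason is the greedy (.*) in the first pattern: when a step contains the word “of”, the first capture group swallows it. You can see the same behaviour with plain .NET regular expressions, outside SpecFlow entirely:

using System;
using System.Text.RegularExpressions;

class GreedyDemo
{
  static void Main()
  {
    var match = Regex.Match(
      "the user should have a LastName of Bloggs",
      @"the user should have a (.*) (.*)");

    // The greedy first group consumes as much as it can,
    // leaving only the final word for the second group.
    Console.WriteLine(match.Groups[1].Value); // "LastName of"
    Console.WriteLine(match.Groups[2].Value); // "Bloggs"
  }
}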

Updating the attributes to

[Then(@"the user should have a (.*) of (.*)")]
[Then(@"the user should have a (.*) (.*)")]

This change gave the “of” binding precedence and ensured that both variations of the step worked correctly.

Hello Kitty

It seems like a weekly occurrence nowadays: whenever you turn on the news or open a technical blog you’re faced with another story of a huge organisation being hacked and personal information being stolen.

This time it is Hello Kitty who have lost their members’ personal information. According to wired.com more than 3.3 million user accounts have been breached. Unfortunately this is not uncommon, over the last few years we’ve heard similar stories from Sony, Ashley Madison and Experian.

What strikes me time and time again when I read these articles is how poorly my personal information is protected. Let’s look at the Hello Kitty story: wired.com tells us that users’ passwords were hashed with SHA1 but not salted.

Any computer science graduate or junior developer should be able to tell you that this is insufficient! I consider myself a geek so I’m always interested by this sort of thing. Let’s spend a minute to work out what’s going on…

A hash is basically a repeatable one-way function. If you use SHA1 to hash a piece of text (such as a password) it’s trivial to repeat, but (almost) mathematically impossible to work out the original string from the hashed value. This is great for developers: I save the hashed value in my database, and when you enter your password I simply hash the value you gave me and see if it matches my saved version. Neither you, nor I, nor any hacker can work out what your password is, even if they have somehow stolen my entire database (which I’m also protecting behind DMZs, firewalls and physical security).

At first glance the security used at Hello Kitty should have been enough. Because the hackers who stole the data can’t reverse the hashes, they can’t use the passwords to log in or try them on other sites (such as your email account), so you’d think this information has limited value.

This is where there’s a gaping flaw in the security, and unfortunately it’s down to the users of the site. Any hacker worth their salt (pun intended) knows the most commonly used passwords. This means I don’t need to crack your password; all I need to do is hash “123456” and see which of the 3.3 million users used it. Next I try “password”, then “12345”. You won’t match everyone, but how many of those 3.3 million will have used one of the top 100 most common passwords? The top million?

So what’s the solution? We use a salt. This is what Hello Kitty should have done. A salt is a piece of random data, usually unique to each user, which is concatenated onto the user’s password before hashing. This time, even if the hacker has access to the salts, their lookup table of common passwords is rendered useless. They would need to recalculate the top passwords with the salt for each member individually; this is a hugely expensive operation and we’re back to brute force.

Some of the most recent algorithms involve hashing a password multiple times. The effort for us to hash the hash of a password a few hundred or thousand times is negligible, but our hacker? They’ll have to calculate the hash of every password they want to try, for each member individually, multiplied by the number of times you iterate!
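In .NET the whole recipe (a salt plus many iterations) is available out of the box through Rfc2898DeriveBytes, which implements PBKDF2. Here’s a minimal sketch; the iteration count and key length are purely illustrative, not recommendations:

using System;
using System.Security.Cryptography;

class PasswordHasher
{
  public static byte[] HashPassword(string password, byte[] salt, int iterations)
  {
    // PBKDF2: combines the password with the salt,
    // then re-hashes the result for the given number of iterations.
    using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
    {
      return pbkdf2.GetBytes(32); // 256-bit derived key
    }
  }

  static void Main()
  {
    // A unique random salt per user
    var salt = new byte[16];
    using (var rng = RandomNumberGenerator.Create())
    {
      rng.GetBytes(salt);
    }

    var hash = HashPassword("123456", salt, 10000);
    Console.WriteLine(Convert.ToBase64String(hash));
  }
}

Even if two users both chose “123456”, their stored hashes differ because their salts differ, so the attacker’s precomputed lookup table is useless.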

I don’t want to go any further into the technical side. What I’d like you to take away is that somewhere, someone at Hello Kitty made a decision that a basic SHA1 hash was sufficient. That’s not only a huge error of judgement, it has ultimately left their clients feeling vulnerable as passwords are distributed across the web, and has destroyed investor confidence as the company’s reputation is dragged through the mud. Don’t let this happen to your company, don’t let this happen to your users!

Here’s my message.

  • If you’re a developer and you are not confident in your system’s security, don’t be afraid of looking foolish. Raise the questions.
  • If you are a manager whose report raises a security concern, take it seriously. Your entire team’s livelihood and your clients’ confidence are at risk.
  • If you’re an Internet user, look at the common passwords list and avoid them like the plague. Use strong passwords and don’t repeat them across sites. If you don’t feel confident, don’t hand over your personal information!

What’s new (and cool) in C# 6?

Many of the improvements in C# 6 are based around Roslyn: rebuilding the compiler, making it much easier to write Visual Studio plugins and letting us do fancy compilation in our own applications. This is all very exciting stuff, but it doesn’t really impact me day to day. There are, however, two little features which I am using every day and which have already slipped into my standard syntax.

The Null Conditional Operator

The Null Conditional Operator is game changing. Until now we’ve had to perform a series of null checks to ensure that we wouldn’t receive a NullReferenceException when accessing the child properties of an object.


if (user != null && user.Communication != null && user.Communication.PhoneNumber != null)
{
  user.Communication.PhoneNumber.Call();
}

However with the introduction of the ?. operator we can do this all in a single line.


user?.Communication?.PhoneNumber?.Call();

If the user has both a Communication object and a PhoneNumber then the call is made; otherwise the expression short-circuits and nothing happens.

This becomes even more powerful when combined with our old friend the ?? operator.


var numberToCall = user?.Communication?.PhoneNumber ?? DEFAULT_CONTACT_NUMBER;

If user or Communication is null then, instead of throwing a NullReferenceException, the expression evaluates to null. So if any of user, Communication or PhoneNumber is null, the ?? operator kicks in and numberToCall gets the DEFAULT_CONTACT_NUMBER instead.

String Interpolation

The new String Interpolation change is so simple but I’ve found myself using it almost exclusively since it became available.

In the past we’d write


var myMessage = string.Format("Welcome back {0}", user.Salutation);

This was fine, perhaps a little clunky when you had a lot of strings to merge in, but we were happy. That was until I discovered the new C# 6 string interpolation!


var myMessage = $"Welcome back {user.Salutation}";

Not only is this more concise, but it also eliminates the frustration we’ve all experienced when mixing up the position of the arguments.
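It’s worth noting that interpolation isn’t limited to simple variables: any expression can appear inside the braces, and the format specifiers you’d use with string.Format still work after a colon. For example (the names here are illustrative, not from a real codebase):

var total = 42.5m;
var user = new { Salutation = "Mr Smith", Visits = 3 };

// Any expression can appear inside the braces,
// and a format specifier follows a colon, just as with string.Format.
var receipt = $"Thanks {user.Salutation}, visit number {user.Visits + 1}, total {total:C}";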

Two simple changes, two huge reasons to use C# 6 now!

Why write Unit Tests?

Testing is a passion of mine and it’s something I expect to write about a lot more in the future. What I feel is discussed less often though is “Why are we writing tests?”

When many people talk about writing tests they talk about writing Unit Tests, classes and methods broken down into small, isolated areas of code which can be examined with tests to guarantee code quality. Let’s look at an example:

public string ReadText(string filename)
{
  if(File.Exists(filename))
  {
    return File.ReadAllText(filename);
  }
  else
  {
    return null;
  }
}

This code reads the text in a file and returns it; if the file doesn’t exist it returns null.

The thing is, this is trivial code. Writing unit tests for something like this is surely overkill? That thirty minutes or so could be invested in the next feature, a bug fix or meeting a tight deadline.

Let’s suppose another developer comes along in a few years’ time. They see this method and know something their predecessor didn’t: .NET has a FileNotFoundException! Deciding to be a conscientious coder, they update the method to throw an appropriate exception if the file isn’t found.

public string ReadText(string filename)
{
  if(File.Exists(filename))
  {
    return File.ReadAllText(filename);
  }
  else
  {
    throw new FileNotFoundException(string.Format("The file '{0}' was not found", filename));
  }
}

Unfortunately our conscientious developer missed something. One of the myriad methods which call our method is GetOrCreateFileWithContents.

public string GetOrCreateFileWithContents(string filename, string defaultContents)
{
  var currentContents = ReadText(filename);
  if(currentContents == null)
  {
    currentContents = CreateFile(filename, defaultContents);
  }

  return currentContents;
}

Because of our change this method now fails: ReadText throws an exception and the new file is never created in its place. This may be an oversimplified example with an overzealous (dare I say it, careless) tidy-up, but it illustrates the risks we take every day when refactoring and improving code.

This is the true value of unit tests: not finding bugs, but defining the behaviour of the method. If our original developer had invested that extra thirty minutes, our Boy Scout would have had some warning when they tried to update the method; they’d have seen that they’d changed the behaviour of the class in an unacceptable way and wouldn’t have made their changes.
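As a sketch of what that thirty-minute investment might look like (NUnit syntax; the FileReader class name and the temp-file setup are my own framing, since the original doesn’t show where ReadText lives):

using System.IO;
using NUnit.Framework;

[TestFixture]
public class FileReaderTests
{
  [Test]
  public void ReadText_FileDoesNotExist_ReturnsNull()
  {
    // A filename we can be confident doesn't exist
    var missingFile = Path.Combine(Path.GetTempPath(), Path.GetRandomFileName());

    var result = new FileReader().ReadText(missingFile);

    // Pins down the contract that GetOrCreateFileWithContents relies on
    Assert.IsNull(result);
  }
}

With this in place, the FileNotFoundException change would have turned the build red before it ever reached production.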

The moral of the story? Unit tests may seem like overkill while you’re writing them, but spare a thought for the poor soul who’s trying to read your methods in a few years’ time… Leave them a map: a series of executable tests which guarantee that your required behaviour remains unchanged. Your thirty minutes could save them hours, prevent bugs being introduced and help keep your application stable.

SpecFlow what is it and why should I care?

I was introduced to SpecFlow a year or so ago by a friend of mine during a presentation at Agile Yorkshire. He described it as a tool for BDD and I was immediately struck by the revolutionary concept of human-readable tests. It’s taken me a little time, some projects and many follow-up questions, but I feel I’m really beginning to grasp the power of this easy-to-use tool.

So what is it? SpecFlow is a framework which translates text into a series of steps which are compiled as a test.

This makes more sense with an example. Let’s say I’m developing the software for a cash machine.


Given I have £150 in the bank
When I withdraw £100
Then I should have £50 remaining in my account

SpecFlow will then generate three steps for you to implement.

[Binding]
public class Bindings
{
  [Given(@"I have £(.*) in the bank")]
  public void GivenIHaveInTheBank(int p0)
  {
    ScenarioContext.Current.Pending();
  }

  [When(@"I withdraw £(.*)")]
  public void WhenIWithdraw(int p0)
  {
    ScenarioContext.Current.Pending();
  }

  [Then(@"I should have £(.*) remaining in my account")]
  public void ThenIShouldHaveRemainingInMyAccount(int p0)
  {
    ScenarioContext.Current.Pending();
  }
}

I’ve filled these in quickly to give you an idea of how the process is intended to work.

[Binding]
public class Bindings
{
  private readonly CashMachine _cashMachine;

  public Bindings(CashMachine cashMachine)
  {
    _cashMachine = cashMachine;
  }

  [Given(@"I have £(.*) in the bank")]
  public void GivenIHaveInTheBank(decimal initialBalance)
  {
    _cashMachine.Balance = initialBalance;
  }

  [When(@"I withdraw £(.*)")]
  public void WhenIWithdraw(decimal withdrawalAmount)
  {
    _cashMachine.Withdraw(withdrawalAmount);
  }

  [Then(@"I should have £(.*) remaining in my account")]
  public void ThenIShouldHaveRemainingInMyAccount(decimal expectedRemainingBalance)
  {
    Assert.AreEqual(expectedRemainingBalance, _cashMachine.Balance);
  }
}

Behind the scenes SpecFlow has built a test using your test framework of choice (MSTest, NUnit, xUnit or a variety of others). These tests can be run within Visual Studio like any other unit test. Here’s my scenario running in ReSharper:

ReSharper executing the test
I think most people would be impressed by this: SpecFlow has taken human-readable text and enabled us to write a simple unit test for it. We can easily add a second scenario to make sure that our new state-of-the-art cash machine can also dispense pennies.


Given I have £100.86 in the bank
When I withdraw £1.99
Then I should have £98.87 remaining in my account

We can add additional complexity such as overdrafts


Given I have £100 in the bank
And I have an overdraft of £50
When I withdraw £125
Then I should have £-25 remaining in my account

Or functionality to check that I cannot go overdrawn


Given I have £100 in the bank
And I have an overdraft of £0
When I withdraw £125
Then I should be prevented from withdrawing money
And I should have £100 remaining in my account

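For completeness, the new overdraft steps might bind like this. This is a sketch only: the Overdraft property and the LastWithdrawalSucceeded flag are my own assumptions about the CashMachine class, which isn’t shown in full above.

[Given(@"I have an overdraft of £(.*)")]
public void GivenIHaveAnOverdraftOf(decimal overdraftLimit)
{
  _cashMachine.Overdraft = overdraftLimit; // assumed property on CashMachine
}

[Then(@"I should be prevented from withdrawing money")]
public void ThenIShouldBePreventedFromWithdrawingMoney()
{
  // Assumed flag recording whether the last Withdraw call succeeded
  Assert.IsFalse(_cashMachine.LastWithdrawalSucceeded);
}

The existing balance and withdrawal bindings are reused as-is; only the genuinely new phrases need new step definitions.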
I’m sure you can see the value of having simple, understandable definitions of what a particular area of code should do, instead of the complicated, developer-only unit tests which litter many of today’s solutions. If you wish, these executable tests can form the basis of your specs, documentation and even training. Your Product Owner can get involved and validate that the tests are correct; if they’re particularly tech-savvy they may even write a few themselves!

What I particularly love about SpecFlow, however, is the way it empowers today’s QAs. From the moment the first bindings have been created, your QA team can be let loose to create new tests, verify edge cases and prove alternate scenarios until they are happy that the feature works as designed. Acceptance Tests can be created directly from the specification, alongside the features being developed, and the QA team can use them to validate each feature and scenario as it is built. These same tests then form the basis of your regression suite for years to come.

Should any of these tests fail, the QA team can provide the developer with not only human-readable replication steps but an executable test which can be debugged on the dev’s own PC.

I find this very exciting, we’ve started using SpecFlow for a number of projects and I have every intention of using it for more. If you’re interested in finding out more visit the SpecFlow website or read their Quick Start Guide.

Learning the role of Scrum Master at the deep end

My induction into my informal position of Scrum Master was rather quick: a member of staff was leaving and the team needed someone to help run the meetings. Learning fast by jumping straight in at the deep end is my typical (if not favourite) way of learning, so I rolled up my sleeves, read a few books and did my best.

So what have I learned? What dubious advice could I pass on to any developer who finds themselves stepping into the same role?

Know your Ceremonies

The Scrum Master’s week is punctuated by meetings such as Daily Standups, Planning Sessions, Sprint Reviews and Retrospectives. You’ll encounter resistance to these because devs like writing code; they don’t enjoy spending hours each week sat in the meeting room. You’ll need to justify these sessions: if you can’t explain why you want to meet then it’s best not to!

Daily Standup

This is the most common session you’ll be expected to run. It’s also the most famous; many businesses claim to be agile simply because they have a daily standup! Getting it right is key to your team’s success.

First, the obvious question: do you actually need to stand up? No! People who have never attended a standup will challenge the practice, and there are arguments for and against standing, so you need to understand both sides if you want to discuss them. On the one hand, physically standing encourages participants to keep their part short and on topic; I also like to guide speakers around the circle so everyone knows when they will be expected to talk and no one is left stammering. On the other hand, standing up can be very difficult for distributed teams. Meeting over Skype with headsets removes a lot of background noise, but brings with it the inevitable technical problems and opens up that unending source of distractions, the dev’s PC. There’s no right answer; different teams will prefer one style or another.

Keeping on topic is a huge challenge and one I’m still struggling with. Your standup should consist of three bullet points:

  • What did you do yesterday?
  • What are you aiming to do today?
  • Are you blocked in any way and are you on track to burn down?

In my experience there are two main reasons for a standup wandering off topic: Story Telling (going into too much detail about yesterday) and Problem Solving (using everyone’s time to try to become unstuck). Devs love to talk about challenges they overcame and to solve other people’s problems. The key is to spot when this occurs and suggest that these conversations are had later.

The sixteenth minute is an interesting idea. As I mentioned above, one of the main reasons standups drag on is people trying to utilise the entire team to solve their impediments. One approach is to formalise an optional follow-up chat for the required parties so they can continue to discuss issues without hindering the rest of the team. Keep an eye on how many of these follow-up conversations are needed; too many and they rapidly take over the team’s morning.

Sprint Planning

I’ve found the key to Sprint Planning is preparation, which can often be an issue when the meeting is scheduled for first thing on a Monday morning! Before you walk into the planning session you should have:

  • The backlog prioritised
  • Details of the capacity of your team (or team members if you plan individually)
  • Details of how much work has been carried over from the previous sprint

If you don’t have this information to hand you’ll need to acquire it in the meeting, even though it’s all information you should have access to through your planning software. Asking for it in the meeting wastes everyone’s time and leads to boredom and frustration; not how you want to start your sprint!

Armed with this data your planning meeting is simply a matter of dishing out the highest priority tasks to the team members who want to pick them up. Simple!

Sprint Review

The purpose of the Sprint Review is to demonstrate what you’ve built to the stakeholders and other interested parties: often your Product Owner, Support Team or other devs. These people then have the opportunity to provide feedback which can be considered, prioritised and included in the project.

The problem I often find is that there are so many ideas, so many discussion points and so many questions that your one hour slot is barely enough to get through the first demo. So how can we address this?

  • Nominate a single person to note down the suggestions; by making sure everything is written down no idea is lost, and they can all be prioritised and scheduled with your Product Owner
  • Consider recording videos instead of live demos; the live demo is the bane of any presenter. By asking your devs to record videos in advance the demos are foolproof, there are no unexpected technical problems and you know exactly how long each video will last!
  • Save questions and feedback until each presenter has finished. We are geeks, not public speakers; give your developers a chance by letting them get to the end before bombarding them with suggestions on how to improve the task they’ve been slaving over all week!

Sprint Retrospective

A fundamental principle behind the Agile Manifesto is that teams should regularly reflect on what went well and what they’re struggling with, then adapt their processes to improve. Often this is done in a retrospective meeting.

When I first started doing retrospectives we followed a well-known pattern:

  • What went well?
  • What didn’t go well?
  • What should we start?
  • What should we stop?

We worked our way around the table and everyone was expected to come up with a suggestion for each list. I found these meetings rather unfulfilling; they often descended into a rant about process, other teams, technology or whatever else was irking the group at the time. While I’ve no doubt it was good therapy, I didn’t feel the hour achieved much.

Instead I’d suggest planning each meeting with a specific topic:

  • How can we ensure that high priority support requests are picked up mid sprint?
  • Should we use Web API or WCF for our new API?
  • How can we get clear acceptance criteria for our upcoming bespoke development?

By stating what you hope to achieve from the retrospective in advance you stand a much higher chance of success! Ask your team for items they want to discuss, email around your proposed meeting topic with plenty of notice and ask your devs to bring their own ideas to the meeting.

Hopefully I’ve given you a few helpful suggestions to consider. I don’t claim to be an Agile expert, and I’m still struggling to follow much of my own advice, but every sprint we learn a little more… Isn’t that the point, after all!?