Rolling Rocks Downhill Book Review

I read Clarke Ching’s book Rolling Rocks Downhill a while ago and I picked it up again recently while on holiday. I’d enjoyed it the first time, but going back over the story a second time with a little more leadership experience I was amazed at just how many valuable lessons Clarke has managed to cram in there!


The book is a Business Novel and follows Steven, the Development Manager at Wyxcomb Financials, who discovers that a competing company has stolen their idea and is determined to beat them to market. His own project, on the other hand, is behind schedule and likely to slip even further. With his company’s future at stake, Steven must develop a new approach to development to save the day.

What follows is a story of a team discovering Agile ideas and practices for themselves.  Not because they have a guru preaching at them, but because they’re trying to solve real business problems. I couldn’t help chuckling at some of the discussions Steven has with his mother, the Product Sponsor, and even the Cafeteria Manager as he learns about Iterative Releases, Product Backlogs, Continuous Testing and the Theory of Constraints.

After all, can you really understand the advantages of building a small number of features in an iterative manner if you’ve not heard about the French Fry Revolution!?

An easy read and a great book to reinforce some of the ideas of Scrum you already know with real-world examples. Recommended to anyone!

Is Proper Project Management the Waterfall We’re All Afraid Of?

I recently watched a Pluralsight course on Project Management for the Software Engineer. My role has changed recently and I’m finding myself far more involved in planning the team’s big picture. I hoped with a few more ideas and skills under my belt I’d stand more chance of rising to the challenge when things get tough.

Having watched the video (very good by the way, I’d highly recommend it) I came away with an uneasy feeling that this “Project Management” stuff was simply Waterfall under another guise. As developers we’ve been bombarded over the last few years with the message that Waterfall is antiquated, that Agile is the new methodology, and that all notion of long-term planning and detailed requirement documents must be stamped out.

Needless to say, when I watched the video explaining the importance of a proper project plan and detailed specifications I was rather taken aback.

Surely we’re not preaching a return to the old days of strict deadlines and work planned out months and months in advance?

It is at this point that I have to raise my major frustration with Agile and Scrum in particular. Everything I’ve read and every speaker I’ve listened to advocates a prioritised backlog of work which the team will work through to deliver maximum benefit to the business.  This is great for a startup, or a team developing a solution where they are continuously refining a project to deliver more and more value.

In my experience however the business process is rarely so clear cut. For example, I have a deadline looming, several customers have paid for dozens of new features to be included in our application and the sales team have committed to a UAT delivery date of the end of June. Suddenly I’m not working to a prioritised backlog, I’m working to a project deadline and failure to meet it could have serious repercussions!

So how do we combine these two approaches? How can I maintain the ownership and agile nature of a scrum team while carrying out the long term project plan necessary to ensure a successful delivery? Is it possible or are the two mutually exclusive?

Let’s look for a moment at what the Project Management approach promises us:

  • A clear set of deliverables agreed by the client
  • A clear list of delivery dates for stakeholders and other teams in the business
  • Easier control by planning work up front, allowing us to track progress and spot issues as they arise

Is there a way we can keep these benefits without sacrificing the agile nature of scrum?

A Clear Set of Deliverables Agreed by the Client

Let’s take this one first. Agile does not mean wishy-washy requirements; it means breaking down big visions into manageable and deliverable chunks. While a client may envisage a huge monolithic project, it is the job of the project team to break this down into small, simple user stories which can be delivered in manageable pieces.

Where an analyst may claim that a multi-document specification contains every use case and scenario the system may encounter, I’d be willing to bet significant money that they’re wrong. Furthermore, by the time you deliver this enormous project the business value for it may have waned dramatically (I’ve seen this happen myself). It is undeniably better to break your huge requirements down into components which can be delivered for continuous feedback as you go.

A Clear List of Delivery Dates

Assuming you are working in a Scrum team, the agile process does not obscure delivery dates; in fact it embraces them. Rather than setting an arbitrary deadline months and months in advance and gearing the resource and financial plans towards it, continuous and iterative releases increase the reliability of hitting deadlines. After all, you’re less likely to fall significantly behind over a two-week period than a two-month one!

Use your sprint end dates as your deadlines, deliver frequently into a sandpit environment so the client can see your progress and begin testing as early as possible. Big deadlines are a lot less stressful when you’re on the nth iteration!

Upfront Planning for Clear Progress Tracking

As a Project Manager you are going to be continuously asked whether a project is on track to meet the business deadlines. I’ve not tried it myself, but I’m willing to bet that turning around to your client and saying “We don’t have deadlines, we’re agile…” probably isn’t the best approach.

When we create our backlog we break our features into User Stories; each is estimated and prioritised. At the beginning of each sprint the highest priority items are selected to be worked on. The risk from a project management perspective is that other, “more important” stories may be moved into the sprint in place of your items.

However, in a business environment we need to plan releases and deliverables several sprints in advance. As a Project Manager we need to ensure that our work is completed on schedule and not simply pushed back until the 11th hour.

My suggestion here would be to create a project plan which covers which sprint each User Story should be completed in; these should be negotiated ahead of time with the Product Owner. At the beginning of the sprint the work which was planned in is moved up to a high priority. On a quiet week the team may have capacity to complete your planned work as well as some of the other tasks; on a busy week your work may not be the most pressing (there may be customer support requests which take precedence, for example). However, as with all things, it is the Product Owner’s prerogative to decide which tasks will give the most business value: yours, or the other contenders’.

What this process allows is for you to plan out your control points well ahead of time. If you’re expecting certain tasks to be delivered at the end of sprints 2, 4, 5 and 6 then you can begin monitoring these and verify that the targets are being hit. If they’re not then you can explain very clearly why they fell behind, whether that was because a task took longer than expected or because a more pressing task came in.

This is the same process we already work in; resources can be reassigned at any time in a normal project. The advantage here is that the Product Owner formally balances the priorities of the business and gives reasons why the project must be allowed to fall behind.

Handling Scope Creep

As developers we’re constantly aware of the pressures of scope creep. A piece of work is designed, estimated and scheduled in. Then, as soon as the client sees it they have another idea and want a further feature to be added. In an agile environment we want to encourage this feedback, ultimately it helps us build software which better suits our customers’ needs.

From a business perspective this feature creep can be deadly. Budgets are drawn up and quotes delivered based on the original feature, if these prove to be inaccurate or underscoped then you are effectively delivering your service for free.

In order to avoid this it’s vital that a formal process for Change Requests is agreed ahead of time. If a customer feels that this is their one and only chance to refine the product to make it valuable to them then they will push and push everything into the same delivery. If however a formal CR process exists, they know they can continue to work with the team and refine as they go. There’s a theme here: the emphasis needs to move away from a single monolithic delivery and towards small, iterative and manageable releases.

In Conclusion

I believe that the two approaches of Project Management and Scrum are not mutually exclusive. In fact I believe both aspects are vital if you want to achieve anything other than a constant aimless meandering of features.

For the two approaches to work well together I feel there are a few steps which must be carried out:

  • Break your project into manageable tasks and tentatively assign them to sprints with the agreement of your Product Owner.
  • Measure the progress of your project through its lifetime; ensure that if a task is not completed in the given sprint you understand why, and stress to the PO the urgency of making up the ground.
  • Embrace change and allow your customers to refine as the project is developed. Formalise the process and be clear that any change will be re-scoped and budgets will be updated.

I’m going to be attempting this approach in a few weeks for a release to our biggest customer. Hopefully this balanced approach will see us through.

My first foray into AngularJS

I’ve been aware of AngularJS for some time; a talk by Craig Norton at Agile Yorkshire piqued my interest, but I’m ashamed to say that I’ve never invested the time to look into it. A few comments colleagues and blogs have made about breaking changes in V2 have put me off somewhat. Is Angular turning into another Silverlight?

We’ve been developing an in-house DevPortal at work and a colleague of mine was very keen to use Angular and Bootstrap, to investigate their value before bringing them into our core product.

Over the past few weeks I’ve been feeling somewhat left behind, unable to contribute to some exciting process changes because I wasn’t up to speed with the technology. Not being comfortable with (or particularly liking) my lack of understanding, and knowing how it feels to be the only evangelist on a team, I’ve invested a little time this weekend to try and understand the basics of Angular.

With the help of Pluralsight I’ve started covering the basics. I’m not very far through yet, but I can already remember what intrigued me at Craig’s talk. This is HTML and JavaScript, but built on the kind of powerful framework you’d expect from a .NET application.

This is the Hello World of AngularJS: we create a module and a controller, and bind the content of the h1 element to the helloMessage variable defined on $scope.

<!doctype html>
<html ng-app="app">
  <head>
  </head>
  <body>
    <h1 ng-controller="HelloWorldCtrl">{{helloMessage}}</h1>
    <script src="https://code.angularjs.org/1.4.0/angular.js"></script>
    <script type="text/javascript">
      angular.module('app', []).controller('HelloWorldCtrl', function ($scope) {
        $scope.helloMessage = "Hello World";
      });
    </script>
  </body>
</html>

I’m looking forward to seeing what’s in the second module!

Multiple Binding Attributes in SpecFlow

I recently discovered something rather nice in SpecFlow. I was implementing a scenario like this:

Scenario: Save a user
Given I have a user with the name Joe Bloggs
When I save them
Then the user should have a FirstName Joe
And the user should have a LastName of Bloggs

I wanted to provide flexibility in the assertions so our QA could decide how he wanted to phrase the text in the scenario. Logically, however, we’d want the same binding for each variation.

Here’s what I came up with:

[Then(@"the user should have a (.*) (.*)")]
[Then(@"the user should have a (.*) of (.*)")]
public void ThenTheUserShouldHaveA(string field, string value)
{
  var user = GetUser();
  Assert.AreEqual(user.Properties[field], value);
}

However this didn’t work; for the second step I kept getting a field of “LastName of”, because the greedy (.*) in the first pattern swallowed the “of”. I discovered, however, that you can reverse the order of the binding attributes to give one pattern priority.

Updating the attributes to

[Then(@"the user should have a (.*) of (.*)")]
[Then(@"the user should have a (.*) (.*)")]

This change gave the “of” binding precedence and ensured that both scenario steps worked correctly.
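
If you’d rather not rely on attribute ordering at all, an alternative (a sketch of my own, not something from the original post) is to tighten the capture groups so the field name can never swallow the “of”:

[Then(@"the user should have a (\w+) (?:of )?(.*)")]
public void ThenTheUserShouldHaveA(string field, string value)
{
  // (\w+) only matches word characters, so "LastName of Bloggs" can no longer
  // bind field as "LastName of"; the optional non-capturing (?:of )? absorbs the "of".
  var user = GetUser();
  Assert.AreEqual(user.Properties[field], value);
}

A single attribute now covers both phrasings, so the precedence question disappears entirely.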

Hello Kitty

It seems like a weekly occurrence nowadays: whenever you turn on the news or open a technical blog you’re faced with another story where a huge organisation has been hacked and personal information has been stolen.

This time it is Hello Kitty who have lost their members’ personal information. According to wired.com more than 3.3 million user accounts have been breached. Unfortunately this is not uncommon, over the last few years we’ve heard similar stories from Sony, Ashley Madison and Experian.

What strikes me time and time again when I read these articles is how poorly my personal information is protected. Let’s look at the Hello Kitty story: wired.com tells us that users’ passwords were hashed with SHA1 but not salted.

Any computer science graduate or junior developer should be able to tell you that this is insufficient! I consider myself a geek, so I’m always intrigued by this sort of thing. Let’s spend a minute working out what’s going on…

A hash is basically a repeatable one-way function. If you use SHA1 to hash a piece of text (such as a password) it’s trivial to repeat but (almost) mathematically impossible to work out the original string from the hashed value. This is great for developers: I save the hashed value in my database and when you enter your password I simply hash the value you gave me and see if it matches my saved version. Neither you, nor I, nor any hacker can work out what your password is, even if they have somehow stolen my entire database (which I’m also protecting behind DMZs, firewalls and physical security).
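
As a minimal sketch of that hash-and-compare login check (illustrative only, and using a hypothetical PasswordHasher class — plain SHA1 is exactly the scheme this post goes on to criticise):

using System;
using System.Security.Cryptography;
using System.Text;

public static class PasswordHasher
{
  public static string Sha1Hash(string password)
  {
    // Hash the password bytes and render them as a hex string for storage.
    using (var sha1 = SHA1.Create())
    {
      var bytes = sha1.ComputeHash(Encoding.UTF8.GetBytes(password));
      return BitConverter.ToString(bytes).Replace("-", "");
    }
  }

  public static bool IsLoginValid(string enteredPassword, string storedHash)
  {
    // We never store the password itself, only its hash; to check a login we
    // hash whatever the user typed and compare it with the stored value.
    return Sha1Hash(enteredPassword) == storedHash;
  }
}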

At first glance the security used at Hello Kitty should have been enough. Because the hackers who stole the data can’t reverse the hashes, they can’t use the passwords to log in or try them on other sites (such as your email account), so you’d think this information has limited value.

This is where there’s a gaping flaw in the security, and unfortunately it’s down to the users of the site. Any hacker worth their salt (pun intended) knows the most commonly used passwords. Which means I don’t need to know your password; all I need to do is hash “123456” and see which of the 3.3 million users used it. Next I try “password”, then “12345”. You won’t match everyone, but how many of those 3.3 million will use one of the top 100 most common passwords? The top million?
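
Here’s a sketch of that dictionary attack, reusing the hypothetical PasswordHasher from above (the leaked data is entirely made up for illustration; requires System and System.Collections.Generic):

// Hypothetical leaked data: unsalted SHA1 hash -> username.
var leakedHashes = new Dictionary<string, string>
{
  { PasswordHasher.Sha1Hash("123456"), "hellokitty_fan_42" }
};

var commonPasswords = new[] { "123456", "password", "12345" };
foreach (var guess in commonPasswords)
{
  // One hash per guess cracks every user who chose that password, in one pass.
  var hash = PasswordHasher.Sha1Hash(guess);
  if (leakedHashes.ContainsKey(hash))
  {
    Console.WriteLine("Found a user with password '" + guess + "'");
  }
}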

So what’s the solution? We use a salt. This is what Hello Kitty should have done. A salt is a random piece of data, often a piece of random text unique to the user, which is concatenated onto the user’s password before the hashing process. This time, even if the hacker has access to the salts, their lookup table of common passwords is rendered useless. They would need to recalculate the top passwords with the salt for each individual member; this is a hugely expensive operation and we’re back to brute force.
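
A sketch of per-user salting, building on the hypothetical Sha1Hash helper above (these methods could sit in the same class; note the salt isn’t secret — it’s stored alongside the hash, it just makes precomputed tables useless):

public static string CreateSalt()
{
  // 16 cryptographically random bytes, Base64-encoded for easy storage.
  var saltBytes = new byte[16];
  using (var rng = RandomNumberGenerator.Create())
  {
    rng.GetBytes(saltBytes);
  }
  return Convert.ToBase64String(saltBytes);
}

public static string SaltedHash(string password, string salt)
{
  // Every user now hashes to a different value, even with the same password.
  return PasswordHasher.Sha1Hash(salt + password);
}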

Some of the most recent algorithms involve hashing a password multiple times. The effort for us to hash the hash of a password a few hundred or thousand times is negligible, but our hacker? They’ll have to calculate the hash of every password they want to try, for each member individually, multiplied by the number of times you iterate!
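
In .NET this kind of key stretching is built in via PBKDF2 (Rfc2898DeriveBytes, in System.Security.Cryptography); a minimal sketch, with an iteration count chosen purely for illustration:

public static string StretchedHash(string password, byte[] salt, int iterations)
{
  // PBKDF2 applies the hash over and over; each extra iteration multiplies
  // the attacker's work for every single guess against every single user.
  using (var pbkdf2 = new Rfc2898DeriveBytes(password, salt, iterations))
  {
    return Convert.ToBase64String(pbkdf2.GetBytes(32));
  }
}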

I don’t want to go any further into the technical side. What I want you to take away is that somewhere, someone at Hello Kitty made a decision that a basic SHA1 hash was sufficient. That’s not only a huge error of judgement, but it has ultimately left their clients feeling vulnerable as passwords are distributed across the web, and it has destroyed investor confidence as the company’s reputation is dragged through the mud. Don’t let this happen to your company, don’t let this happen to your users!

Here’s my message.

  • If you’re a developer and you are not confident in your system’s security, don’t be afraid of looking foolish. Raise the questions.
  • If you are a manager whose reportee raises a security concern, take it seriously. Your entire team’s livelihood and your clients’ confidence are at risk.
  • If you’re an Internet user, look at the common passwords list and avoid them like the plague. Use strong passwords and don’t repeat them across sites. If you don’t feel confident, don’t hand over your personal information!

What’s new (and cool) in C# 6?

Many of the improvements in C# 6 are based around Roslyn: the compiler has been rebuilt, making it much easier to write Visual Studio plugins and to do fancy compilation in our own applications. This is all very exciting stuff but it doesn’t really impact me day to day. There are, however, two little features which I am using every day and which have already slipped into my standard syntax.

The Null Conditional Operator

The Null Conditional Operator is game changing. Last year we’d have to perform a series of null checks to ensure that we weren’t going to receive a NullReferenceException when accessing the child properties of an object.

if (user != null && user.Communication != null && user.Communication.PhoneNumber != null)
{
  user.Communication.PhoneNumber.Call();
}

However with the introduction of the ?. operator we can do this all in a single line.

user?.Communication?.PhoneNumber?.Call();

If the user has both a Communication object and a PhoneNumber then the call is made; if anything in the chain is null, nothing happens.

This becomes even more powerful when combined with our old friend the ?? operator.

var numberToCall = user?.Communication?.PhoneNumber ?? DEFAULT_CONTACT_NUMBER;

Now, if user or Communication is null then, instead of throwing a NullReferenceException, the chain simply evaluates to null. So if any of user, Communication or PhoneNumber is null, the whole statement returns DEFAULT_CONTACT_NUMBER instead.

String Interpolation

The new String Interpolation change is so simple but I’ve found myself using it almost exclusively since it became available.

In the past we’d write

var myMessage = string.Format("Welcome back {0}", user.Salutation);

This was fine, perhaps a little clunky when you had a lot of strings to merge in, but we were happy. That was until I discovered the new C# 6 String Interpolation!

var myMessage = $"Welcome back {user.Salutation}";

Not only is this more concise, but it also eliminates the frustration we’ve all experienced when mixing up the position of the arguments.

Two simple changes, two huge reasons to use C# 6 now!

Why write Unit Tests?

Testing is a passion of mine and it’s something I expect to write about a lot more in the future. What I feel is discussed less often though is “Why are we writing tests?”

When many people talk about writing tests they talk about writing Unit Tests, classes and methods broken down into small, isolated areas of code which can be examined with tests to guarantee code quality. Let’s look at an example:

public string ReadText(string filename)
{
  if(File.Exists(filename))
  {
    return File.ReadAllText(filename);
  }
  else
  {
    return null;
  }
}

This code reads the text in a file and returns it, if the file doesn’t exist it returns null.

The thing is, this is trivial code. Writing Unit Tests for something like this is overkill surely? That thirty minutes or so could be invested in the next feature, a bug fix or meeting the tight deadline.

Let’s suppose another developer comes along in a few years’ time. They see this method and know something their predecessor didn’t: .NET has a FileNotFoundException! Deciding to be a conscientious coder, they update the method to throw an appropriate exception if the file isn’t found.

public string ReadText(string filename)
{
  if(File.Exists(filename))
  {
    return File.ReadAllText(filename);
  }
  else
  {
    throw new FileNotFoundException(string.Format("The file '{0}' was not found", filename));
  }
}

Unfortunately our conscientious developer missed something. One of the myriad methods which call our method is GetOrCreateFileWithContents:

public string GetOrCreateFileWithContents(string filename, string defaultContents)
{
  var currentContents = ReadText(filename);
  if(currentContents == null)
  {
    currentContents = CreateFile(filename, defaultContents);
  }

  return currentContents;
}

Because of our change this method now fails; ReadText throws an exception and the new file is never created in its place. This may be an oversimplified example with an overzealous (and dare I say it, careless) tidy-up, but it illustrates the risks we take every day when refactoring and improving code.

This is the true value of Unit Tests: not in finding bugs but in defining the behaviour of the method. If our original developer had invested that extra thirty minutes, our Boy Scout would have had some warning when they tried to update the method; they’d have seen that they’d changed the behaviour of the class in an unacceptable way and wouldn’t have made their changes.
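
Here’s a minimal sketch of such a test, using NUnit and assuming the methods above live in a class called FileReader (a name invented for illustration):

using NUnit.Framework;

[TestFixture]
public class FileReaderTests
{
  [Test]
  public void ReadText_WhenFileDoesNotExist_ReturnsNull()
  {
    var reader = new FileReader();

    // Pins down the original contract: a missing file means null, not an exception.
    var result = reader.ReadText("no-such-file.txt");

    Assert.IsNull(result);
  }
}

Had a test like this existed, the switch to throwing FileNotFoundException would have failed the build immediately, prompting a conversation rather than a broken GetOrCreateFileWithContents.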

The moral of the story? Unit Tests may seem like overkill while you’re writing them. But spare a thought for the poor soul who’s trying to read your methods in a few years’ time… Leave them a map: a series of executable tests which guarantee that your required behaviour remains unchanged. Your thirty minutes could save them hours, prevent bugs being introduced and help keep your application stable.