The Increment and Definition of Done

The last couple of posts I’ve written have been about the Product Backlog and Product Goal, then the Sprint Backlog and Sprint Goal. Today I’m going to focus on the Increment and its Commitment, the Definition of Done. These are slightly different from the previous two Artefacts and Commitments: the Backlogs and their Commitments show the current progress of work and the vision of what that work will look like when it’s delivered.

The Increment is the piece of work which is delivered to the customer at the end of the Sprint. Increments should be as small as possible, although one may well be an entire Sprint’s worth of development. The Increment is the next piece of the product, added to create more value. Scrum.org has a nice definition of an Increment.

An Increment is a concrete stepping stone toward the Product Goal. Each Increment is additive to all prior Increments and thoroughly verified, ensuring that all Increments work together. In order to provide value, the Increment must be usable.

The Scrum Guide 2020

This is different from the previous two artefacts because it doesn’t describe the current state of what’s being worked on for the purposes of inspection; it is what’s being worked on.

Increments are added iteratively, with each one meeting the Definition of Done; this ensures that quality does not decrease during the Sprint.

Photo by Startup Stock Photos on Pexels.com

However, as with the previous artefacts, it comes with a commitment. Where the Sprint Goal and Product Goal described the objectives we were working against, the Definition of Done is a commitment to create transparency over what has been delivered.

The Definition of Done is a formal description of the state of the Increment when it meets the quality measures required for the product.

The Scrum Guide 2020

One of the biggest frustrations in modern software development is when something is “done” but exactly what that entails is a little hazy. One team may mean code complete, another may mean tested, another may mean deployed to a customer. The Definition of Done creates a shared agreement of what the team understands “Done” to mean. This prevents the awkward scenario where a stakeholder is expecting tested functionality but gets only code complete stories.

It also makes estimation easier because everyone has a clear understanding of what is expected.

The Definition of Done creates transparency by providing everyone a shared understanding of what work was completed as part of the Increment.

The Scrum Guide

If a team is working as part of a Nexus of Scrum Teams developing a single product, or if they’re part of an organisation providing components to another team, then the Definition of Done should be common across teams so that they all share the same understanding of when something can be said to be Done.

If the Definition of Done for an increment is part of the standards of the organization, all Scrum Teams must follow it as a minimum

The Scrum Guide 2020

Teams which do not have a clear Definition of Done experience friction when engaging with stakeholders, uncertainty when it comes to estimates, and ambiguity when planning out work.

A good Definition of Done should include items like:

  • Acceptance tests pass
  • Non-functional tests pass
  • Documentation created
  • Deployment pipelines created

It may also include “deployed to customer” if a team is following Continuous Deployment practices. However, for the purposes of Scrum we only commit that our Increments will be “Potentially Shippable”.

One final point: the Definition of Done ensures that quality does not drop as functionality is added. The Scrum Guide states that quality does not decrease during the Sprint, and the Definition of Done helps us verify that each and every Increment meets that bar before it is declared complete.

Hopefully this has been helpful, I think it may be time to look at the Scrum Values!

So You Want To Be A Software Developer?

I’ve been working in Software Development for over ten years now. First as an engineer, then a tech lead, and now a manager. It’s an extremely exciting, challenging, and rewarding industry to work in but it can also be stressful and quite opaque from the outside.

I had the misfortune of leaving university in 2008, right in the middle of the financial crisis. I’d like to be able to tell you about rejection letter after disappointing email, but the truth was that more often than not I heard nothing. I was one of the lucky ones: after three months of sending applications off into the void I stumbled across a small company in Harrogate who took me on and saw my potential as a future developer. I’m immensely grateful to Bill, Joy, Pete, Chet and the others who invested in me and gave me the opportunity to show that this was an industry I could thrive in.

People looking to enter IT in 2021 are going to be facing competition just as difficult as, if not more so than, I did. I want to help. That’s why, over the next few weeks, I’m going to blog my advice and suggestions for anyone looking to join the industry for the first time. I’m in the immensely fortunate position of having gone full circle from applicant, to engineer, to experienced hiring manager, and this is my attempt to pay it forward for all the people who have helped me along my journey.

If you would like to receive this information then please subscribe to my blog and follow the Twitter account. I would also like to set up a mailing list but, as that’s likely to cost £££s, I’ll wait until I’ve got a few people following along and feeding back, to make sure I’m not spending purely for my own vanity.

The software industry is not what you see in the movies!

So what is working in the software industry actually like?

First, there’s a lot less creation of new software than you may actually expect. There are “greenfield” projects as we call them. But these are usually with either startups (which can be potentially risky) or an established company investing heavily in a new product. The majority of software roles out there are for established companies wanting to fix bugs and expand the functionality of their existing systems.

We rarely work alone. Most companies have teams of around seven people called Scrum Teams. These teams will contain a mix of developers and testers, most will also contain a representative from the business called a Product Owner.

Professional Software Developers rarely work alone; teams of around seven people are most common.

When most people think of development they think of websites and mobile apps because those are the most visible. However, unless you decide to specialise in web or mobile you’re much more likely to find a role building membership systems (my second job), warehouse stock inventory, or finance (my current job). Software is everywhere and there are IT jobs out there in sectors you haven’t even heard of yet.

I want to finish this post by asking you a question. I’ve interviewed more people than I can count and asked hundreds of questions in interviews. I want to give you practice answering these questions so you don’t get stuck when you find yourself on the phone or in an interview situation.

Given what you know, what especially appeals to you about working in the software industry?

Think of your answer and let me know how you’d answer either on Twitter, via email, or by posting in the comments below.

I hope you found this post useful. As I mentioned above this is going to be the first in a series which I will aim to publish each Thursday. So please subscribe to the blog and follow me on Twitter, join my email list (when it’s available), and share it with anyone else you know who’s likely to be looking for a role in software development in 2021.

Leeds Testing Atelier VIII

Last week I was lucky enough to go to the Leeds Testing Atelier which was hosted, once again, at the Wharf Chambers in Leeds.

This was the 8th Atelier and the fourth (I think) that I’ve been to. If you’ve not been along before then I highly recommend it as a conference; it’s a very unusual meet-up, partially because of the informality of the event (did I mention it was hosted in a bar/music venue?) but also because of the wide range of topics and speakers. Although centred on testing, the organisers understand that quality comes from a wide range of interpersonal, technical, and communication techniques and they encourage sessions on these topics at the event. I debuted my Communication talk there, but more on that later.

The first talk of the day I went to was The Sleepy Tester by Hannah Prestwell. Hannah’s talk was inspired by a book called Why We Sleep by Matthew Walker; it’s a book I’ve heard of before and I really need to add it to my reading list.

Hannah talked to us about the importance of getting enough sleep, the value of sleep in forming memories and learning, and its value in emotionally reflecting on recent events. It turns out the phrase “sleep on it” really is based in science.

The next talk I went to was Imposter Syndrome by Beth North. Beth had the outstanding idea of creating imposter personas to identify the different ways Imposter Syndrome can impact people. It was a great talk and really engaged a lot of people in the audience (myself included). I had the sudden urge to run out half way through and update my slides to include her great ideas.

I spoke downstairs next. My talk was entitled Performance Testing Your Communication and I spoke about various ways of monitoring and maintaining safety in a conversation as well as how to influence people around you by understanding their personality and values. I was quite pleased with how it went, especially as this was the first time I’d done this talk outside work, and I was delighted to see the tweets roll in afterwards.

The final talk I saw (I had to head back to the office for the afternoon) was a lightning talk by Sophie Weston about lightning talks. In-house presentations are a topic very close to my heart. Not only do I think they’re a great way to share knowledge, but doing internal presentations was how I got started before I moved on to external conferences – I can’t think of a better way to boost your confidence. I’m definitely going to take a few of her tips back to the office to see if we can use them to improve ours!

The team stayed later, really enjoying their afternoon sessions and talks. I went back to an afternoon in the office but really enjoyed my morning – the organisers were a great help and really made me feel welcome and looked after (especially when I had projector woes).

A huge thanks to the Atelier Gang – I hope to see you all next time!

The Lock Complex

I have recently coined the term Lock Complex as a symptom of what many people call Fake Agile. Allow me to explain…

Waterfall development is often described with the Design, Development, and Testing phase structure. Many teams adopting Scrum tend to fall into one of two mistakes.

Photo by Trace Hudson on Pexels.com

The first mistake is to split these phases into Sprints: Sprint 1 is for design; Sprints 2, 3, and 4 are for development; and testing and bug fixing go into Sprints 5 and 6. This isn’t Scrum. Clarke Ching uses a phrase I like in his book Rolling Rocks Downhill: he talks about GETS software, that’s Good Enough To Ship. At the end of each Sprint the software must be production ready. By falling into the sprint-phase trap you’re lowering quality between releases and not realising the value of Scrum.

The second mistake teams make is to try and run each Sprint as a mini waterfall. This is what I now describe as The Lock Complex. Teams falling into this trap will design in the first few days, develop for a few more, and then test their work towards the end. Yes, the software is GETS at the end… but doesn’t this look like a waterfall on a smaller scale?

Canal Locks of Neptune’s Staircase by aeroid CC BY-SA 3.0

The main symptom of this approach is people twiddling their thumbs (testers at the start of the Sprint and developers at the end). While wasted time is frustrating, the real problem is the lack of shared knowledge; by unlocking that you can quickly raise your game towards Continuous Delivery.

The way to solve this becomes quite apparent if you look at the DevOps utopia we’re all told about. In a world of Continuous Delivery and automated approvals we create automated acceptance tests to ensure that our code functions as expected. If the feature doesn’t meet these automated tests then it will not be merged in, or if it has been then the deployment pipeline will stop.
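As a sketch of what such a merge gate might look like – the checkout feature and its expected behaviour here are invented purely for illustration, not taken from any real pipeline:

```python
# Hypothetical acceptance gate: the checkout feature and the checks
# are illustrative only.

def checkout(basket: list[float], discount: float = 0.0) -> float:
    """The feature under test: total a basket with an optional discount."""
    return round(sum(basket) * (1 - discount), 2)

def acceptance_failures() -> list[str]:
    """Run the acceptance checks; an empty list means the gate passes."""
    failures = []
    if checkout([10.0, 5.0]) != 15.0:
        failures.append("totalling a basket")
    if checkout([100.0], discount=0.1) != 90.0:
        failures.append("applying a discount")
    return failures

# A CI step would block the merge (or halt the deployment pipeline)
# whenever any acceptance check fails.
gate_passed = not acceptance_failures()
```

In practice these checks would live in a test runner invoked by the pipeline, but the principle is the same: the feature either satisfies the agreed behaviour or it does not ship.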

In this world, not only are we deploying faster and achieving single piece flow but we’re breaking that Lock Complex. People are busy all the time and pair and mob programming become the norm. Instead of having a testing phase where it’s our QA engineers’ time to shine, we have continuous collaboration, with our quality specialists advising on the best tests and mechanisms to implement. Testers no longer run manual tests; we get computers to do that. Testers work to ensure that the automated tests give us a coherent test strategy.

If we can help our teams to break the Lock Complex and stop working in mini-waterfall Sprints then we’ll see the benefits as people collaborate more, achieving higher velocity and quality as a result.

Testing Atelier

On Tuesday I was lucky enough to get tickets for one of my colleagues and I to go to the Leeds Testing Atelier. I’ve never been to one of these before but wow, the guys had worked extremely hard and created an amazing day!

There were two tracks (hipsters and nerds) throughout the day and it was action packed with different talks, topics, and workshops.

Before we got going, however, Clem led a group of us in a Lean Coffee session. I’ve never done one before (most likely because of my intense dislike of coffee!) but it’s definitely an idea I’ll be trying out in team meetings at work!

I attended a couple of talks in the morning. The first was on unit testing best practices, which I enjoyed; I got the chance to ask a question on custom assertions which test multiple things (something I’ve been debating in my own head for a while). The answer, by the way, was “it’s ok, as long as your test continues to only test one thing” – a view I agree with!

Next up we’re a couple of short talks, one one using agile techniques to plan family life and other on website performance profiling. Both interesting and certainly talking points!

After a break Alex Carter spoke to us about the roles QAs can play in building the three ways of DevOps.

The three ways (in case you’ve not come across them) are:

  1. Systems thinking
  2. Amplify feedback
  3. Continuous experimentation and improvement


It turns out that a QA is key in making this work. They’re the quality gatekeepers: they challenge processes to build quality in at all stages and act as the team’s safety net when risky changes are made. If you’ve never run through this in your head (or even better with your team) then I highly recommend you do!

Lunch was pizza, in fact huge amounts of pizza! Then we headed upstairs for some QA-based fun and games (some seriously difficult interviewing and spot the differences).


My final session of the day was a panel on continuous delivery. The panellists answered questions on everything from getting started to business challenges, with a chance for the audience to ask their own at the end.

In summary, the Leeds Testing Atelier was great. It was informal, informative, and had a great atmosphere, with people willing to share experiences and ask questions. I’d like to thank the sponsors and organisers for all their hard work. If you’ve not been to one of these before then I’d highly recommend going in 2018 – I know I will be!

Have you tried At Desk Testing?

Last week I wrote about the value of finding issues early. How it becomes increasingly expensive and time consuming to fix issues the further down the development lifecycle you get. With that in mind we can now appreciate that anything we can do to find bugs earlier makes our software not only better but cheaper to develop.

Something we’re trialling at the moment are at desk demos. The idea is simple, before signing off a piece of work and passing it onto the next link in the chain (Dev to QA, QA to Support Analyst, Support Analyst to Dev and so on) you demonstrate the issue or feature to them.

For example, before I finish a feature and pass it onto someone who specialises in testing I invite my buddy over to “give it a bash”.

Remember last week? I talked about the time it takes to move from one link in this chain to another. I discussed how it can take a few hours to build your software, another hour or so to deploy it, and a day to run the signoff scripts (obviously this varies if you’re fortunate enough to be working on a ‘modern’ solution or have invested in some proper CI). Time moving backwards is time wasted; if you can avoid rework then you should always take the opportunity to do so!

Offering up my work to the QA for a few minutes before formally handing over can save hours of wasted time. These guys know what they’re looking for and can often find edge cases and give feedback on scenarios you’ve not considered. By having these pointed out to you early you’re saving all this extra time!

The same theory can be applied to a Support Analyst demoing bugs to a Developer rather than just recording replication steps on a ticket or a Developer showing a bug fix to the same analyst before shipping it to a customer’s UAT environment for testing.

So far it’s working well for us. Do you demo before handing over? Do you feel it works for your team?

When and Where to Automate Testing

A year ago I undertook an interesting piece of R&D to write Selenium tests for our main UI. I watched the Pluralsight course, learned the difference between WebDriver and the IDE, and started building my Page Object Model. My simple test took the best part of three weeks to build and executed in around half the time a decent QA would take if you gave them a double shot of espresso.

I patted myself on the back, demonstrated the work to our Product Owner, and then advised that we should shelve it because I wasn’t confident enough to hand over the mind-boggling complexities of waits, framesets, and inherited view models into general practice.

Over time I lost confidence in the project. If it had taken me days to generate even the simplest of tests, how would a junior developer fare when asked to automate complex financial scenarios? Quietly I put the idea to the back of my mind and concentrated on other, more pressing matters.

That was until last week. We’ve been doing some work on our BACS integration and as part of the regression test I’d enlisted the help of one of our QAs to mock each response code and import it into the system. The process was tedious, repetitive, and I hated myself when I had to tell him he’d missed a vital check off each scenario.

As I was speaking to him, cogs in my head began to turn. I’d gone off the idea of large-scale test automation because of the complexity of our UI, but the BACS processing system doesn’t have a UI. I could knock something together in a couple of hours which would create customers, mock BACS files, and schedule our JobServer. Even more powerful, if I used a technology like SpecFlow our QA could write the tests he wanted, I could automate them, and we’d be able to iterate over every scenario within a couple of minutes. Even more exciting was the idea that we could send the feature files off to our Product Owner and banking partner and ask them to verify the behaviour was correct.
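To give a flavour of the approach – the response codes, file layout, and function names below are invented for illustration; the real BACS formats and our system’s API are different:

```python
# Hypothetical sketch of automating the BACS regression pass.
# Codes, file layout, and functions are illustrative only.

RESPONSE_CODES = {
    "0": "payment successful",
    "1": "instruction cancelled",
    "2": "payer deceased",
    "3": "account transferred",
}

def build_mock_file(code: str) -> str:
    """Create a minimal mock response file for a single code."""
    return f"HEADER\n{code},REF0001\nFOOTER"

def process_file(contents: str) -> str:
    """Stand-in for the system under test: read the code from the
    file and return the outcome recorded against the payment."""
    code = contents.splitlines()[1].split(",")[0]
    return RESPONSE_CODES.get(code, "unknown")

# Instead of hand-mocking each file, iterate every scenario in seconds.
outcomes = {code: process_file(build_mock_file(code))
            for code in RESPONSE_CODES}
```

The real version wraps steps like these in SpecFlow bindings so the QA writes the scenarios in plain language, but the core win is the same: every response code is generated, imported, and checked automatically.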

Later that week, after we’d proven our project to be a success and found a handful of low-priority bugs to be corrected, I started to wonder why this automation project had delivered such value where the previous UI automation had failed. I decided this was because:

  • The BACS process was an automated mechanism already so the test automation steps were simpler
  • The UI had been designed for human use, it wasn’t a chore to run through but it was complex for Selenium to navigate the controls
  • Mocking and importing BACS files was repetitive, slow, and tedious

The project turned out to be a huge success, we’re already planning on how we can expand the solution to cover other payment integrations such as SEPA.

The next time you’re considering whether or not you should automate an area of testing consider the nature of the tests. Do they use complex UIs? Do you have to repeat very similar tests over and over again? Do they currently take up a large portion of your testing cycles?

Try to automate the hidden processes which run over and over, and which you wish you could test every time, if only you had the time!

Creating a Valuable Signoff Process

We all have them: a series of tests which need to be run before a build is considered good enough to be released to a customer. You may have different scripts for different stages of release (Alpha/UAT/Live), and different processes depending on the severity of any bugs found. Whatever your approach, I’m sure we can all agree that your signoff process is one of the most important pieces of your development puzzle and one of the most vital to get right in order to avoid issues further down the line.

But what makes a signoff process good? These are the things I believe a good signoff process needs to achieve:

  • Detect all unknown bugs in all critical features
  • Be quick and easy to complete

Clearly these are at odds, if you want to find every bug then you’ll need to invest significant time! Let’s look at why each of these are so important before we try to find a solution.

The first one is fairly obvious, the most critical parts of your system are the ones which will generate the most urgent Support Tickets. In order to minimise those stressful late nights we need to validate that those areas of the system are as robust as possible.

But why the speed? Why not simply run through every test and permutation before each release (assuming you don’t mind driving your QA team mad)? In this world of Agile Development and quick turnarounds it’s becoming more important than ever to test and release quickly. After all, in my previous post about Sprint Planning I suggested that you should aim to both develop and test your new features in each Sprint. In order to maximise your development potential you need to make your signoff process as efficient as possible.

So how do we do this? The key is to target your testing effectively. You need to work with your Product Owner to identify the Critical Functionality which must not fail, these form the basis of your signoff scripts. Other areas of the system can be tested on a rotational basis or when changes are made.

This prioritisation of the most critical functionality guarantees that the vital happy paths are always tested. This leaves more time for the QA team to expand their efforts into other areas while the developers are coding new features. By targeting your signoff scripts you can guarantee a high quality build without the lengthy delays which come from a bloated signoff process.

The Importance of Testing Early

I recently had a conversation with a Development Manager at a company based in Leeds. We were discussing when to involve the QA Team in a release we were planning. I argued that there was little value in wasting the QA guys’ time until we were feature complete; after all, everything was still subject to change and they’d only have to repeat those tests again at a later date.

Ironically I now hold the opposite view.

If you walked up to me today and asked at what stage of development you should bring QA resource into a project I would always advise that as soon as the developers start coding it’s too late.

Your QAs are not automated test machines, I can crank out a few Selenium scripts to test a UI during my lunch hour! Your QA team are there to ensure that the features you deliver are the highest quality they possibly can be. So when does quality begin? I would argue in the design phase!

I’m currently working with a QA who, for a variety of reasons is trying to work out all a feature’s permutations eighteen months after the design was originally done. He’s documenting these, generating Functional Tests for them, and raising bugs where required. This is incredibly time consuming and takes lots of time from him, a development resource, and the Product Owner. Imagine if he’d had the opportunity to work this out before development work had begun!

The key here is to allow your Product Owner, QA, and Developer to create the spec together. The developer sets to work and the QA begins creating their functional tests; as soon as the feature is code complete your QAs are ready to go!

So, my original concern was that our testers would have to continue to test over and over again. Yes, this is a risk, however, when would you rather be alerted to any issues… as the developer is adding finishing touches, lining up buttons and tidying Unit Tests, or six weeks after they’ve finished? I know which I’d prefer!

This is where the distinction between Functional Tests and Signoff Tests becomes important. Functional Tests are used to test every permutation of a feature, to verify it against the spec, and to perform regression testing after substantial change. Signoff Scripts are to protect your critical functionality. Use your Functional Tests early to ensure that the newly created feature behaves according to spec, use your Signoff Scripts to verify your functionality before a release.

Get your QAs involved in your spec documents, organise your Sprint so they create tests while the developer codes, and get timely feedback on your features while you’re still in a position to fix them.

Multiple Binding Attributes in SpecFlow

I recently discovered something rather nice in SpecFlow. I was implementing a scenario like this:

Scenario: Save a user
Given I have a user with the name Joe Bloggs
When I save them
Then the user should have a FirstName Joe
And the user should have a LastName of Bloggs

I wanted to provide flexibility in the assertions so our QA could decide how he wanted to phrase the text in the scenario. Logically, however, we’d want the same binding for each variation.

Here’s what I came up with:

[Then(@"the user should have a (.*) (.*)")]
[Then(@"the user should have a (.*) of (.*)")]
public void ThenTheUserShouldHaveA(string field, string value)
{
  var user = GetUser();

  // NUnit's Assert.AreEqual takes (expected, actual) in that order
  Assert.AreEqual(value, user.Properties[field]);
}

However this didn’t work, I kept getting a field of “FirstName of”. I discovered however that you can reverse the binding attributes to give priority.

Updating the attributes to

[Then(@"the user should have a (.*) of (.*)")]
[Then(@"the user should have a (.*) (.*)")]

This change gave the “of” binding precedence and ensured that both scenario steps worked correctly.
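The root cause is ordinary greedy regex matching. As a quick illustration – using Python’s re module here rather than SpecFlow’s .NET regex engine, though the greedy semantics are the same – the looser pattern happily swallows the literal “of” into the first capture group:

```python
import re

step = "the user should have a LastName of Bloggs"

# The looser pattern matches greedily: the first (.*) captures
# "LastName of", swallowing the literal "of".
loose = re.match(r"the user should have a (.*) (.*)", step)
print(loose.groups())     # ('LastName of', 'Bloggs')

# The more specific pattern anchors on " of " and splits correctly.
specific = re.match(r"the user should have a (.*) of (.*)", step)
print(specific.groups())  # ('LastName', 'Bloggs')
```

If you’d rather not depend on attribute ordering at all, tightening the first group to something like `(\w+)` (assuming field names are single words) stops the looser pattern from matching steps that contain “of” in the middle, removing the ambiguity altogether.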