Taking Back Control of Your Calendar

I had an epiphany the other day. I’ve long believed that the number of meetings in your calendar is some kind of function of the size of your organisation and your seniority. As a people manager at a company of over 8000 people my calendar gets pretty full on (I dread to think what my boss’s looks like).

I was commenting to my wife, while making a drink, that I’d been invited to seven different meetings over the next hour. She rolled her eyes at the ridiculousness of an overpacked calendar and I was about to smile ruefully when something struck me.

Letting meetings pile up and sit as “tentative” is a sign of indecision, a source of stress, and disrespectful to the meeting organisers who have requested my help.

When I have a block of fifty or more meetings a day in my calendar, no one looking at it (including me) can actually tell whether I’m busy or not. Good productivity comes from having a plan for what to do with your time, not making it up on the spot.

We need to start owning our calendars, not letting them manage us!

Calendars should be a tool to keep us organised, not a source of stress. So what did I do?

  • Immediately declined, with apologies, any meeting I wasn’t planning to attend
  • Ensured that I was never supposed to be in more than one place at once
  • Scheduled time to actually do the tasks I needed to do

This turned out to be an extremely therapeutic exercise, one I plan to repeat each week going forward. Time will tell if it leads to less indecision and procrastination!

What are your tips and tricks for managing your working day? How do you deal with excessive meetings?

Getting Things Done Book Review

I was recently talking with a colleague I respect greatly about personal organisation. He said he’s a great believer in the GTD method. I raised an eyebrow; it sounded like some kind of car maintenance routine. But when someone who always seems to have his eye on any number of spinning plates throws three letters in your direction, I find it’s a good idea to listen.

A little googling led me to a Todoist blog post and then on to David Allen’s book Getting Things Done. I felt my eyes had been opened.

As a developer I grew up with Scrum and Kanban. When I moved into management I started creating to-do lists, but no matter how hard I try they always seem to fall out of date. David Allen’s book, although fairly exhaustive, really opened my eyes to a better way of working.

I don’t want to go into too much detail on the GTD methodology; there are far better resources out there and the book itself is very comprehensive. However, there are a few nuggets in there which are too good not to share.

The first revelation for me was that your inbox is not your to-do list. It’s a capture tool, used for recording every commitment you make and idea you have. Allen’s approach is to frequently empty your inbox by actioning, scheduling, or delegating tasks. The same principle applies to an email inbox: don’t let it build up, move items out into an Archive or Action Required folder so you’re not digging through thousands of messages for the ones you need.
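To make that processing step concrete, here’s a minimal sketch of the decision flow as I understand it, written in Python. The wording of the categories and the exact thresholds are my own paraphrase of the method rather than a quote from the book:

```python
# A minimal sketch of the GTD "process your inbox" step as a decision flow.
# The category wording is my own paraphrase, not Allen's exact terms.

def process_inbox_item(item, minutes_to_do, actionable, my_responsibility):
    """Decide what to do with a single captured item."""
    if not actionable:
        return f"'{item}': file it as reference or bin it"
    if minutes_to_do <= 2:
        return f"'{item}': do it now (two-minute rule)"
    if not my_responsibility:
        return f"'{item}': delegate it and track it on a Waiting list"
    return f"'{item}': defer it (schedule it or put it on a next-actions list)"

# Examples: a quick reply gets done immediately, a long report gets deferred,
# and someone else's server migration gets delegated and tracked.
print(process_inbox_item("reply to Dave's email", 2, True, True))
print(process_inbox_item("write the Q3 report", 120, True, True))
print(process_inbox_item("server migration", 480, True, False))
```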

The other ideas I liked were the concept of Agenda projects, to keep track of topics to cover in specific meetings, and using a Waiting folder for work you are tracking but have delegated to other people.

Hopefully this has given you a taster. If, like me, you find actions slipping through the cracks or waste time looking for the next task, I’d highly recommend the book.

It’s quite iterative: the first couple of chapters contain most of the secrets, which are then expanded upon and developed in subsequent pages. It’s also very… almost technology-phobic. I appreciate that the methodology should be tech-agnostic, something you can do with a pen, paper, and a few folders, but in this digital-first world I’d have started with a technological approach. However, the Todoist post I mentioned above gives a very practical guide to implementing it using their (frankly outstanding) software.

I’m a few weeks in and so far I’m a big advocate!

Why Do Most Projects Finish Late?

I’m currently reading Agile Estimating and Planning by Mike Cohn; one of the things he discusses in the early chapters is why so many projects fall behind. Many of his ideas fall in line with Eli Goldratt’s thinking in The Goal and with what is described as The First Way in The Phoenix Project.

Mike discusses Parkinson’s Law, which postulates that work expands to fill the time allocated to it. In other words, if you’ve got a project and a deadline you won’t finish early, because you’ll use the remaining time to refine, improve, and polish the work.

He also discusses an idea raised in The Goal: that tasks are dependent on each other. In Eli Goldratt’s book, Alex realises the importance of these interdependencies when he takes the Boy Scouts hiking through the woods. Cohn uses the example of developers and testers: the person doing the QA cannot begin until the functionality has been created.

If you combine these two theories you realise that if tasks only ever run late, never early, and subsequent tasks can never begin until the previous one has finished, you end up with an ever-slipping schedule. If each task misses its deadline 10% of the time, then once you’ve multiplied up by the number of tasks the probability that your delivery will run late climbs very quickly towards a statistical certainty!
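To see how quickly that compounds, here’s a quick back-of-the-envelope sketch in Python. The 10% figure comes from the example above; the assumption that tasks slip independently is mine, purely for illustration:

```python
# If each task independently runs late 10% of the time, and any late task
# delays everything after it, the chance the overall delivery slips is
# 1 - 0.9^n for a chain of n dependent tasks.

def probability_delivery_late(tasks, p_task_late=0.10):
    return 1 - (1 - p_task_late) ** tasks

for n in (5, 10, 20, 50):
    print(f"{n:>2} dependent tasks -> {probability_delivery_late(n):.0%} chance of slipping")

#  5 dependent tasks -> 41% chance of slipping
# 10 dependent tasks -> 65% chance of slipping
# 20 dependent tasks -> 88% chance of slipping
# 50 dependent tasks -> 99% chance of slipping
```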

So what can we do about this? Build in contingency? That’s a risky strategy: if the team (or even you) knows there’s flexibility built into the schedule, the work will still expand to consume all the available time.

One approach I’ve heard a lot more recently is to allow projects to be constrained by either time or a feature list, but never both. In a time-based approach, functionality is ranked in order of importance and when the clock runs out the project is delivered, regardless of whether or not all the features are complete. In a feature-driven release (for example a lean MVP project), the team continues to work until all the features have been completed, regardless of how long that takes.

Personally I’m much more of a fan of the first approach. By keeping the prioritised list transparent with your clients and stakeholders you can always be working on exactly the right things. The scope is cut when the time runs out (rather than adding low-value features and running over), and, one of my favourite reasons for adopting it: if a project does take less time than you expect, your client gets more functionality for their money instead of feeling ripped off by inflated estimates. That’s something I’d certainly appreciate as a customer!

What do you think? How do you agree on project deadlines and commitments?

Who Should You Share Your Backlog With?

I mentioned last week that I’d been to an interesting meeting discussing (and challenging) a few of the Scrum ideas. One of the questions we asked was about Backlog Visibility.

We all agreed that making the backlog visible to our customers would invite feedback and reduce the likelihood that we’d spend time developing features which gave little value to our clients (see last week’s post on exactly what “value” is). But we couldn’t shake a distinct unease at making our plans visible, and it took quite some discussion to articulate why.

Many customers and businesses work to project plans (ours included); they have long-term visions of where they want to be and what they want to achieve. We realised with some trepidation that sharing a fluid and ever-changing backlog of ideas with these customers would create an expectation, and that at the next meeting that expectation would turn to disappointment when we had to admit the features we’d discussed were never in fact developed.

In an Agile environment we want to encourage this change: we want to update our backlog to remove unnecessary items and reorder the list to put the maximum-value items at the top. However, presenting this to a client could undermine us, particularly if the customer liked several of the items we ultimately decided to prune.

We decided that this is where clear communication is key. We talked about making it extremely clear that these were items we were considering working on, not a definitive product plan. A Development Forecast, we suggested; after all, no one blames the weatherman if the reality turns out to be slightly different from what was predicted!

What are your thoughts? Do you share your Product Backlog with your clients? How do you handle their expectations when some work they are hoping for may still be cut?

The Cost of Fixing Bugs Late

Have you ever been in a situation where you’ve written a piece of code and it’s almost there, but there are a few niggling issues which “can wait for V2”? Have you ever wished, maybe months later, that you’d gone back and resolved them at the time? You’re not alone!

More and more developers and managers are beginning to realise the true cost of fixing bugs late: not only to their sanity but to the company’s time and money.

Let’s say, for the sake of argument, that you’ve finished a piece of code except for a few edge cases. You’re under a lot of time pressure, so you decide it’s of high enough quality and you push it through to QA. The cost to the business of fixing that issue actually increases dramatically the longer it’s left.

Let’s look at a few scenarios:

You fix the issue immediately

  • Fix time – 30 minutes

You send it to QA and it’s rejected there

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Fix time – 30 minutes

QA passes it as an “edge case” but the customer disagrees

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Deploy to UAT – 3 hours
  • Customer UAT – 1 week

Missed in UAT but discovered as a significant issue in Live

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Deploy to UAT – 3 hours
  • Customer UAT – 1 week
  • Customer live deploy – 3 hours

 

As you can see, even with these rough estimates the time it takes to resolve the customer’s issue increases dramatically the longer it’s left. Not only that, but the cost to the business of paying staff to run extra QA cycles or rounds of UAT spirals out of control. And we haven’t even considered the final case, where the bug goes to the queue to die and a completely different developer has to learn the feature before they can resolve the problem.
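To put rough numbers on it, here’s a quick tally in Python using the illustrative figures from the lists above. Treating everything as elapsed hours (so a week of customer UAT becomes 168 hours) is my own simplification; adjust the figures to taste, the shape of the result doesn’t change:

```python
# A quick tally of the elapsed time in each scenario above, using the
# illustrative figures from the lists. All times are treated as elapsed
# (calendar) hours, so "1 week" of customer UAT is 168 hours.

HOURS_PER_WEEK = 7 * 24

scenarios = {
    "Fixed immediately":            [0.5],
    "Rejected by QA":               [0.5, 2, 1, 24, 0.5],
    "Passed by QA, caught in UAT":  [0.5, 2, 1, 24, 3, HOURS_PER_WEEK],
    "Missed in UAT, found in Live": [0.5, 2, 1, 24, 3, HOURS_PER_WEEK, 3],
}

for name, hours in scenarios.items():
    print(f"{name}: {sum(hours):g} hours to resolution")

# Fixed immediately: 0.5 hours to resolution
# Rejected by QA: 28 hours to resolution
# Passed by QA, caught in UAT: 198.5 hours to resolution
# Missed in UAT, found in Live: 201.5 hours to resolution
```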

The graph is actually a very common shape*.

[Graph: the cost of resolving an issue climbing steeply the longer it is left]

I’m a big fan of practicality: no software is going to be perfect, and it’s unrealistic to expect that you’re going to find and fix each and every issue before shipping to a customer.

However, hopefully this has made you stop and think, and has provided a strong argument for taking that extra 30 minutes to resolve an issue before it causes a real headache for the business!

*Thanks to fooplot.com for the graphing software.

Agile Planning vs Planning for Agile Teams

I’ve been reading Agile Estimating and Planning by Mike Cohn recently; one of the ideas introduced in the first couple of pages is that Agile Planning is very different to planning an Agile Project.

Mike explains that as you progress through the project the amount of uncertainty diminishes. He calls this the Cone of Uncertainty and argues that you should continuously revise your plans as you revise your priorities.

In an agile project change is embraced and priorities are adjusted so the team are always delivering maximum value to the business. The idea of agile planning is that your plans should develop as this uncertainty reduces.

For example:

  • Sprint 1 – We estimate this is about 12-18 weeks’ worth of work
  • Sprint 2 – We’ve done some of the initial R&D and underestimated several of the user stories. We now believe the total project will take 22 weeks
  • Sprint 4 – We’re happy with our 22-week estimate, but we’re getting into some of the big refactors at the moment; we’ll confirm in a few weeks
  • Sprint 6 – The majority of the work is behind us and we actually got some quick wins. We are now aiming to deliver at the 20-week mark
  • Sprint 8 – We’ve mostly finished and will be delivering in the 20th week
  • Sprint 10 – Your install will be on Tuesday the 15th

As you can see, progress reports and updates are continuously fed back to the stakeholders, but these are estimates which are refined over time rather than hard deadlines set before the work has even begun.

I’m very intrigued by this idea. I do have my concerns about how it would work when delivering to a third party; several of our customers pull large testing teams together for UAT testing of our software. I’m not sure how they’d react to an “it’ll be 12-18 weeks”, but I’m certainly interested enough to continue reading!

The Importance of Testing Early

I recently had a conversation with a Development Manager at a company based in Leeds. We were discussing when to involve the QA Team in a release we were planning, and I argued that there was little value in wasting the QA guys’ time until we were feature complete. After all, everything was still subject to change and they’d only have to repeat those tests at a later date.

Ironically I now hold the opposite view.

If you walked up to me today and asked at what stage of development you should bring QA resource into a project, I would always advise that by the time the developers start coding it’s already too late.

Your QAs are not automated test machines; I can crank out a few Selenium scripts to test a UI during my lunch hour! Your QA team are there to ensure that the features you deliver are of the highest quality they can possibly be. So when does quality begin? I would argue in the design phase!
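For context, this is the sort of lunch-hour script I mean: a minimal Selenium smoke check in Python against a hypothetical login page (the URL and element IDs are invented for illustration). Useful, but a world away from the thinking a good QA brings to a feature:

```python
# A throwaway Selenium smoke check: load a (hypothetical) login page and
# confirm the sign-in button is present. URL and element IDs are made up.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com/login")
    assert "Login" in driver.title
    button = driver.find_element(By.ID, "sign-in")  # raises if the element is missing
    assert button.is_displayed()
    print("Smoke check passed")
finally:
    driver.quit()
```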

I’m currently working with a QA who, for a variety of reasons, is trying to work out all of a feature’s permutations eighteen months after the design was originally done. He’s documenting these, generating Functional Tests for them, and raising bugs where required. This is incredibly time-consuming, for him, for a development resource, and for the Product Owner. Imagine if he’d had the opportunity to work this out before development had begun!

The key here is to allow your Product Owner, QA, and Developer to create the spec together. The developer sets to work while the QA begins creating their functional tests, and as soon as the feature is code complete your QAs are ready to go!

So, my original concern was that our testers would have to test the same things over and over again. Yes, this is a risk. However, when would you rather be alerted to any issues: as the developer is adding the finishing touches, lining up buttons and tidying unit tests, or six weeks after they’ve finished? I know which I’d prefer!

This is where the distinction between Functional Tests and Signoff Tests becomes important. Functional Tests are used to test every permutation of a feature, to verify it against the spec, and to perform regression testing after substantial changes. Signoff Scripts exist to protect your critical functionality. Use your Functional Tests early to ensure that the newly created feature behaves according to spec, and use your Signoff Scripts to verify your critical functionality before a release.
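As a rough sketch of how that split might look in practice (my own illustration using pytest markers and an invented discount feature, not a prescription from any particular team):

```python
# Separating functional tests from signoff tests with pytest markers.
# The feature under test and its rules are invented for illustration.
# Register the markers in pytest.ini to silence warnings:
#   [pytest]
#   markers =
#       functional: exhaustive checks against the spec
#       signoff: critical-path checks run before a release
import pytest

def apply_discount(price, code):
    """Toy feature: 10% off with 'SAVE10', otherwise no discount."""
    return round(price * 0.9, 2) if code == "SAVE10" else price

@pytest.mark.functional
@pytest.mark.parametrize("price,code,expected", [
    (100.0, "SAVE10", 90.0),   # happy path
    (100.0, "save10", 100.0),  # codes are case-sensitive per the (invented) spec
    (0.0,   "SAVE10", 0.0),    # edge case: free item
])
def test_discount_permutations(price, code, expected):
    assert apply_discount(price, code) == expected

@pytest.mark.signoff
def test_checkout_critical_path():
    # The one check always run before a release: a valid code gives 10% off.
    assert apply_discount(100.0, "SAVE10") == 90.0

# Run the exhaustive suite during development:   pytest -m functional
# Run just the critical checks before a release: pytest -m signoff
```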

Get your QAs involved in your spec documents, organise your Sprint so they create tests while the developer codes, and get timely feedback on your features while you’re still in a position to fix them.

Setting SMART Goals for Your Team

Annual reviews are often one of those things we do as a box-ticking exercise. It’s dull and time-consuming, and there are often more interesting geeky projects you’d rather be working on.

Words like SMART buzz around our brains for a few weeks and then are promptly forgotten (much like the goals) until the same time the following year, when each goal is ticked off as “Done” or “No Longer Relevant”.

Surely there’s a better way?

Let’s look at what SMART stands for…

  • Specific
  • Measurable
  • Attainable
  • Relevant
  • Time Constrained

Your business may have different words but the gist is the same.

I want you to think about these words differently. Instead of writing goals for someone to achieve, I want you to think about User Stories. It’s reasonable to demand that any PO writing new features for the backlog should create them so they are Specific, have good Acceptance Criteria, are feasible to implement, are a valuable addition to the software, and fit into a single Sprint.

In other words, SMART is just another way of writing detailed and unambiguous instructions!

So now we know that SMART is simply a way of writing clearly, let’s look at what goals you should set.

What do you expect your Team Member to be doing for the upcoming year?

This may sound like an obvious question, but in my opinion annual goals shouldn’t all be about Continuous Personal Development or an arbitrary “I’d like to learn that”, because these are the first things to be abandoned when you have a tough day. Instead, look at what your department’s goals are and propagate these through to your team.

If you have a major release coming up then set that as one of the goals. If you need to analyse system performance or memory usage then write it down. What you will find is that, instead of irrelevant targets which get abandoned in favour of more pressing work, your team suddenly becomes accountable for getting the releases out to customers. Not only that, but they will see that their day-to-day tasks are being used to measure performance instead of the “Nice to Haves” which were abandoned as soon as the year hit a rough patch.

You will quickly find that most developers would much rather have a goal of “Create Offline Sync Mechanism as specified in Feature 123 before the end of January” than something vague like “Improve the Support mechanism”. For one thing, there’s a lot less ambiguity as to whether it’s actually been achieved! Remember, less ambiguity means fewer awkward conversations when you come to assess goals, and that has to be a good thing!

In conclusion, setting clear (or SMART) goals for your team which actually reflect the work they’re going to be doing day to day is a great way of getting investment in your department’s objectives and of making those annual review forms much more relevant. Learning goals are good, but they shouldn’t be the core of a Team Member’s assessment.