Tech Talks at Work

One of the initiatives I sponsor at work which I’m most proud of is our weekly tech talks. The company had a long history of doing them: a small group of people would volunteer to talk about something for about an hour on a Friday afternoon. But the prep was hard work and there was no consistency. We could have talks three months in a row and then nothing for the next six. Worse still, the same people always felt pressured into talking, which seemed very unfair on them.

Back in 2019 I went to the Leeds Test Atelier and attended a talk by Sophie Weston. She discussed what her company had done with tech talks and I was really impressed. She argued that the key requirements for any sustainable tech talks were:

  • Duration should be 20-30 minutes
  • The same time/location every week
  • DO NOT MISS A WEEK
  • Food

Inspired by this, I went back to work and set about seeing if we could pull off something similar. Sophie had explained how important it was to keep the schedule going week after week: if you start missing weeks, those odd weeks develop into hiatuses and then the entire thing stops. She also strongly advised bacon sandwiches, but I didn’t have a budget so we relied on BYOB (Bring Your Own Buttie) instead.

Over the next few weeks I set about pitching my idea and recruiting speakers. I wanted twelve. I figured that if I could find twelve people willing to give a talk and get them on a weekly schedule then it would be worth doing and I might stand a chance of the talks becoming a sustainable weekly occurrence. I got fifteen.

The Mic Drops began to take off.

We booked a meeting room and opened up Skype for anyone who wanted to dial in. We also made sure we recorded all the sessions so anyone who couldn’t attend could watch them later.

At the time of writing we’ve had:

  • 60 consecutive mic drops pausing only for holidays
  • Over 20 different speakers
  • 20+ hours of recorded videos
  • 1700+ attendees

Even the global pandemic didn’t slow us down; we simply moved to 100% online!

So here are my steps for starting up your own weekly tech talks:

  1. Plan out your first three months in advance to make sure the schedule is sustainable
  2. Hold your talks at the same time and place every week
  3. Unless there’s a specific reason, the talk and questions should take less than 30 minutes
  4. Record them
  5. Don’t limit yourself to “Tech Talks”: some of our best talks have been on sleep, agile, management practices, books, and communication skills. Encourage variety!
  6. Always have a backup speaker; try to have two
  7. Survey your department to find out what talks people are interested in
  8. Invite external speakers
  9. Run workshops to coach the less confident speakers
  10. Anyone can talk and anyone can attend!

Have you run tech talks in your company? Did you follow a similar format to us? What worked well for you?

Team Topologies Book Review

I recently listened to the audiobook version of Matthew Skelton and Manuel Pais’s book Team Topologies. It was so good I bought the Kindle version while I was still halfway through so I could make notes and highlight the good bits (most of the book, it turns out).

The book takes several principles, such as Conway’s Law, and really applies them to business teams. This is something I’ve seen first hand. When several teams work on one large product the codebase becomes decoupled if the teams are given ownership of particular components. However, if teams are expected to work across the whole codebase, the solution becomes a monolith and the teams blur into one big super-squad.

In the book the authors argue that there are actually a very limited number of team types in a modern organisation. I don’t want to list them because I’d strongly recommend you buy the book and read the descriptions for yourself. However, of the various types it was the concept of the platform team which intrigued me the most.

I actually think Matt and Manuel underplay the huge value of a platform team. They discuss brilliant ideas about consumable APIs and documentation for the product teams which consume them. However, in a data-driven business like mine I believe we should run far more platform teams and far fewer product teams. If we want our Product Owners to be able to innovate and prove the value of ideas quickly we need our data sources and components to be as plug and play as possible. This allows any product or concept to be built and tested very quickly. If all these services were owned and managed by platform teams instead of falling down the gaps between product teams, the solutions would be more robust and the lead times far lower.

If you’ve never given any thought to how teams are created and assigned areas of ownership then this is a brilliant book to read. If you’re not sure how your teams communicate and share information then this book is essential.

Code Black

In total it took about 18 months to write Code Black, my recently published technical parable. I’d originally had the idea in the summer of 2018 but it took a little time to properly outline the story.

Instead of using a common structure like The Hero’s Journey, I used the various stages a team progresses through as they develop and refine their DevOps practice.

Whenever I write, the first thing I do is try to outline where I want to go. This involved Mike being approached by his friend Bob (who at that point was called Robert). Obviously he had to join the company and walk into chaos; I tried to describe a bad day we could all relate to.

As the team learns they begin to invest in more frequent releases. I wanted to explain as many of the reasons why this is a good idea as possible: the reduced technical risk, the reduced delivery risk, and the increased agility. I also wanted to discuss some of the common objections, before moving on to Continuous Delivery and Continuous Deployment and how using these techniques makes it less likely your sprints will fail and easier to help your customers without resorting to release branching strategies.

Once I’d outlined the story and had a basic idea of the characters it was time to sit down and write. In reality it only took a couple of months to create a first draft. Knowing where I’m going always makes it a lot easier to put words on a page.

Once I’d finished writing I printed everything off and put it on a shelf for a few months. I wanted to forget as much as I could before I started proofreading so I could spot as many errors as possible.

Over this period many of my colleagues found me sat through lunchtime with a stack of paper and a highlighter pen. Believe me, I found a lot of things which didn’t make sense.

Once I’d corrected as much as I could it was time to publish. I’d already created my LeanPub account and in true agile style I decided it was best not to procrastinate and to start gathering feedback. The great thing about LeanPub is that it’s very easy to update your book in response to suggestions.

So that’s the story. I’ve now sold a handful of copies and so far the feedback has been very positive. I probably shouldn’t be, but I’m already thinking about what I should write next!

If you’re interested in picking up a copy of Code Black it’s on LeanPub now.

The Unicorn Project Book Review

When I first heard about The Unicorn Project I have to admit I was disappointed. I’ve long been an evangelist for Gene Kim’s book The Phoenix Project, but I’d just spent months working on my own DevOps book, Code Black.

I shouldn’t have worried; I really enjoyed The Unicorn Project, and we’d taken different angles. Where I’d focused more on the Continuous Deployment journey, Kim’s book focuses much more on developer empowerment and continuous experimentation.

The story follows Maxine, the developer who caused the now legendary payroll outage at Parts Unlimited towards the start of The Phoenix Project. Exiled to the documentation team as punishment, she’s instructed to support the Phoenix rollout but quickly realises how woefully under-supported the engineering teams are. The business piles on more and more pressure, expects more features, and has less and less appreciation for the technical debt they’re racking up. Until, as we know, the entire project explodes.

Working with some familiar characters such as Bill, Brent, Erik, and Maggie, and a few new ones including Cranky Dave and Kurt, our heroine works to make life better for the entire company. These are the engineers, the red-shirts, not the bridge crew. They’re the ones who actually do the work and they’ve got a lot of it to do!

What did I like best? Kim puts lots of emphasis on testing and improving the entire system, not just a small part of it; he focuses on collaboration and the importance of making it easy to onboard developers and share knowledge, and really drives the need to innovate and out-experiment the competition. He also emphasises the importance of treating engineering tools as important systems and draws a distinction between the IT products we build and the miscellaneous ones which just keep the lights on.

What wasn’t so good? Within a few chapters I was absolutely sick of Erik’s use of the word “sensei”… seriously, can’t some of the people he quotes simply be experts, evangelists, or even gurus!?

On a more practical point, the book spends a lot of time evangelising functional programming and scalability technologies. Which is great; they’re very powerful tools. But one of the things I liked so much about The Phoenix Project was how clear it was that the team were struggling with the same tech debt we all are. It made it more relatable, and I worry that in this book Kim’s “rip it out and use the latest and greatest” message will overpower his more generic message of continuous incremental improvement. Perhaps it’s personal preference, but I like my DevOps books technology agnostic.

So, would I recommend this book? Absolutely, without question! I believe that The Unicorn Project will take its place alongside its elder sister on the bookshelves of developers, testers, managers, product owners, and operational engineers for the next decade. If you haven’t already, go and buy it, and while you’re at it why not get a copy of Code Black too!? 🤣

Why Branching still has a Place in the Agile World

I’m sure every developer out there would love to have a single code base with builds which are automatically tested and pushed out to customers. However, let’s assume for a moment that you’ve not got a full CI system with triggered builds, automated testing, and thousands of automated deployments a day.

For the sake of argument let’s say you’ve got some of these pieces in place. Perhaps you have nightly builds? Maybe even some unit tests! But if you’re like most of the companies I’ve worked with, you still run manual signoff scripts and have a couple of guys who do the deployments, know every config setting by heart, and can get your application to run in all kinds of new and innovative ways.

Despite what The Phoenix Project tells us, deployments to customers are still a big deal, and version management (finding a way to release bug fixes but not new features) is a fact of life.

If your customers are anything like mine they’ll be incredibly nervous about taking new features. They want lengthy UAT phases and opportunities to train their staff on new functionality. This may seem like Waterfall to us, but remember many of our clients have stakeholders in some very big businesses. In my industry (Health and Leisure) for example we have code freezes over the January period while New Year’s resolutioners and marketing campaigns are at their peak. Very few businesses would open themselves up to the risk of IT failures during these critical times.

And yet support contracts and maintenance don’t stop. The busy times are when the software is really put to the test and we must be able to respond to any issue which may arise.

This is why we use maintenance branches.

Our job as developers and IT professionals is to deliver a good service to our customers. We need tools and processes to do this. Agile and Continuous Integration are two of those tools but if they don’t help us meet business needs then we should seek others which do.

My customers tell me that they want to be able to take patches quickly if required. That they want to be able to access fixes without the need to test and learn new functionality.

Techniques such as feature toggles help, but in my view the only way to truly meet this need is to cut a release branch after each feature (or, for convenience, block of features) is completed. We usually use a minor version to represent this. Using this model we can support customers without surprising them with new functionality, and continue to develop knowing that we can put out maintenance releases of older versions at any time.
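To make the convention concrete, here’s a minimal sketch in Python. The release/ branch naming and the version numbers are assumptions for the example, not a prescribed scheme:

```python
# Each completed block of features bumps the minor version and gets its
# own maintenance branch; bug fixes then ship as patch releases from that
# branch while feature work carries on towards the next minor version.

def maintenance_branch(version: str) -> str:
    """Map a release version (major.minor.patch) to its maintenance branch."""
    major, minor, _patch = version.split(".")
    return f"release/{major}.{minor}"

# A customer live on 2.4.0 can take an urgent fix as 2.4.1, cut from
# release/2.4, without ever seeing the 2.5 features still in development.
assert maintenance_branch("2.4.0") == "release/2.4"
assert maintenance_branch("2.4.1") == "release/2.4"
assert maintenance_branch("2.5.0") == "release/2.5"
```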

Controversial? Perhaps. Practicality over purity… I hope so. What are your views on release branches and supporting customers with on-premise installations?

Microsoft HoloLens

I was lucky enough to get tickets to see Mike Taulty talk about Microsoft HoloLens at Leeds Sharp last week. I didn’t really know anything about augmented reality or virtual reality, but in the interests of Staying Technical I was definitely interested to find out a little more.

Wow!

It turns out that the HoloLens is a Windows 10 PC designed to be worn rather than sat at. It runs apps from the Windows Store without modification (although Mike did stress that typing furiously in midair is somewhat less precise than using a keyboard). Instead of sitting on a screen, the windows float in front of you, hover on a wall, or rest on a table.

3D apps can be built using Unity 3D (in fact Mike showed us how). I was really impressed by how easy it was to work out what the user was looking at and listen to what they were saying (although I imagine Microsoft Cognitive Services could be used very effectively here).

Can I imagine us all walking around with these things strapped to our heads? Probably not… at nearly £3000 for the developer version these are likely to be out of the consumer budget (although Microsoft are working with partners at the moment to develop other models). But in a commercial environment? For architects, engineers, surgeons, or even as a stepping stone onto the next generation of devices which may weigh little more than regular sunglasses? I could see that!

Am I rushing out to buy a HoloLens? Probably not. But would I seriously consider an Acer or Asus model for a couple of hundred pounds? That’s a much harder question to answer! I can imagine little spaceships flying around my living room or a huge troll sat on my sofa playing chess with me. The sky really could be the limit with these devices!

The Cost of Fixing Bugs Late

Have you ever been in a situation where you’ve written a piece of code and it’s almost there, but there are a few niggling issues which “can wait for V2”? Have you ever wished, maybe months later, that you’d gone back and resolved them at the time? You’re not alone!

More and more developers and managers are beginning to realise the true cost of fixing bugs late, not only on their sanity but on the company’s time and money.

Let’s say for the sake of argument that you’ve finished a piece of code except for a few edge cases. You’re under a lot of time pressure so you decide it’s of a high enough quality and you push it through to QA. The cost to the business of fixing that issue actually increases dramatically the longer it’s left.

Let’s look at a few scenarios:

You fix the issue immediately

  • Fix time – 30 minutes

You send to QA and it’s rejected there

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Fix time – 30 minutes

QA passes as an “edge case” but the customer disagrees

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Deploy to UAT – 3 hours
  • Customer UAT – 1 week

Missed in UAT but discovered as a significant issue in Live

  • Fix time – 30 minutes
  • Build time – 2 hours
  • QA deployment time – 1 hour
  • QA signoff testing – 24 hours
  • Deploy to UAT – 3 hours
  • Customer UAT – 1 week
  • Customer live deploy – 3 hours

As you can see, even with these rough estimates the time it takes to resolve the customer’s issue increases dramatically the longer it’s left. Not only that but the cost of time to the business of paying staff to run extra QA cycles or rounds of UAT spirals out of control. We haven’t even considered the final case where it goes to the bug queue to die and a completely different developer has to learn the feature and resolve the problem.
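To put rough numbers on it, here’s a quick sketch totting up the scenarios above. It simply sums the estimates as listed and counts the week of customer UAT as five 8-hour days; these are illustrative figures, not measurements:

```python
HOURS_PER_DAY, DAYS_PER_WEEK = 8, 5
WEEK = HOURS_PER_DAY * DAYS_PER_WEEK  # 1 week of customer UAT, in hours

# Total delay (hours) before the customer has a working fix, per scenario.
scenarios = {
    "Fixed immediately":         [0.5],
    "Rejected in QA":            [0.5, 2, 1, 24, 0.5],
    "Customer disagrees in UAT": [0.5, 2, 1, 24, 3, WEEK],
    "Discovered in live":        [0.5, 2, 1, 24, 3, WEEK, 3],
}

for name, steps in scenarios.items():
    print(f"{name}: {sum(steps):g} hours")
# Fixed immediately: 0.5 hours
# ...
# Discovered in live: 73.5 hours
```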

The graph is actually a very common shape*.

[Graph: the time to resolve an issue rising steeply the later it is found]

I’m a big fan of practicality, no software is going to be perfect and it’s unrealistic to expect that you’re going to find and fix each and every issue before shipping to a customer.

However, hopefully this has made you stop and think, and has provided a strong argument for taking that extra 30 minutes to resolve the issue before it causes a real headache for the business!

*Thanks to fooplot.com for the graphing software.

When and Where to Automate Testing

A year ago I undertook an interesting piece of R&D to write Selenium tests for our main UI. I watched the Pluralsight course, learned the difference between WebDriver and the IDE, and started building my Page Object Model. My simple test took the best part of three weeks to build and executed in around half the time a decent QA would take if you gave them a double shot of espresso.

I patted myself on the back, demonstrated the work to our Product Owner, and then advised that we should shelve the work because I wasn’t confident handing the mind-boggling complexities of waits, framesets, and inherited view models over into general practice.
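To give a flavour of those complexities, here’s a minimal page-object sketch using Selenium’s Python bindings (our work was against the .NET WebDriver, and the page and element IDs here are hypothetical). Even a simple login needs an explicit wait before the controls can be touched:

```python
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

class LoginPage:
    """Page object wrapping a hypothetical login screen."""

    def __init__(self, driver):
        self.driver = driver

    def log_in(self, username, password):
        # Wait until the form has actually rendered; without this the
        # test races the browser and fails intermittently.
        WebDriverWait(self.driver, 10).until(
            EC.visibility_of_element_located((By.ID, "username"))
        )
        self.driver.find_element(By.ID, "username").send_keys(username)
        self.driver.find_element(By.ID, "password").send_keys(password)
        self.driver.find_element(By.ID, "login-button").click()
```

Multiply that by every frameset, grid, and inherited view model in a large UI and you can see why handing it over was daunting.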

Over time I lost confidence in the project. If it had taken me days to generate even the simplest of tests, how would a junior developer fare when asked to automate complex financial scenarios? Quietly I put the idea to the back of my mind and concentrated on other, more pressing matters.

That was until last week. We’ve been doing some work on our BACS integration and as part of the regression test I’d enlisted the help of one of our QAs to mock each response code and import it into the system. The process was tedious and repetitive, and I hated myself when I had to tell him he’d missed a vital check in one of the scenarios.

As I was speaking to him cogs in my head began to turn. I’d gone off the idea of large scale test automation because of the complexity of our UI but the BACS processing system doesn’t have a UI. I could knock something together in a couple of hours which would create customers, mock BACS files, and schedule our JobServer. Even more powerful, if I used a technology like SpecFlow our QA could write the tests he wanted, I could automate them, and we’d be able to iterate over every scenario within a couple of minutes. Even more exciting was the idea we could send the feature files off to our Product Owner and banking partner and ask them to verify the behaviour was correct.
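To make that concrete, here’s the kind of feature file I have in mind. The scenario wording and the response code are hypothetical examples rather than our real suite; SpecFlow binds each step to an automation method behind the scenes:

```gherkin
Feature: BACS response code handling
  # Illustrative scenario only: step wording and the response code
  # are examples, not our production tests.
  Scenario: Payment fails when the bank rejects the collection
    Given a customer with an active Direct Debit
    When a mocked BACS file with response code "B" is imported
    And the JobServer schedule runs
    Then the customer's latest payment is marked as failed
```

The QA writes and reviews lines like these, the step definitions behind them do the repetitive mocking and importing, and the same file doubles as a behaviour specification for the Product Owner and banking partner.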

Later that week, after we’d proven our project to be a success and found a handful of low-priority bugs to be corrected, I started to wonder why this automation project had delivered such value where the previous UI automation had failed. I decided this was because:

  • The BACS process was an automated mechanism already so the test automation steps were simpler
  • The UI had been designed for human use, it wasn’t a chore to run through but it was complex for Selenium to navigate the controls
  • Mocking and importing BACS files was repetitive, slow, and tedious

The project turned out to be a huge success; we’re already planning how we can expand the solution to cover other payment integrations such as SEPA.

The next time you’re considering whether or not you should automate an area of testing consider the nature of the tests. Do they use complex UIs? Do you have to repeat very similar tests over and over again? Do they currently take up a large portion of your testing cycles?

Try to automate the hidden processes which run over and over, the ones you wish you could test every time, if only you had the time!

Why you MUST run your Unit Tests as part of your Build Process

I was browsing StackOverflow the other day (as many geeks are known to do from time to time) and read an answer which describes writing Unit Tests as being like going to the gym. Now, I’m not sure about the final few sentences about finding another job but I do really like the analogy. It’s hard when you start out but the more you do the easier it gets and the more value it adds.

Now, like going to the gym, there are a couple of fundamentals which, if forgotten, can undermine everything. Unit tests don’t require proper warmups or cool downs, but they absolutely, definitely must be run on the build server for their real value to be exploited.

Let me explain why.

Developers are often hackers by nature; we install different things and experiment. That’s what makes us good at our job. It’s also what makes every developer’s computer slightly different, so software which runs on one machine could behave very differently on another.

What’s worse, if your developer is in a rush or having a bad day they may completely ‘forget’ to execute the tests before committing code. Now, instead of rigorously tested and proven code being added to source control, you’ve got a risk.

Tools like NCrunch help reduce this but the only absolutely foolproof way to ensure that all your tests are executed and passed before deployment is to get your server to run them. Missing out this vital link means you’re opening your build and deployment process up to human error, something you should be very nervous of doing.

Almost all build servers have the capability of running tests so dig out the manual and make sure that when the tests fail, the build fails!
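The mechanics are simple on almost any build server, because CI tools treat a non-zero exit code from any build step as a failure. Here’s a minimal sketch of the idea in Python; the test command and folder layout are assumptions for the example:

```python
# build.py - run the unit tests as a build step and propagate failure.
# Any build server (Jenkins, TeamCity, ...) will mark the build as
# broken when this script exits with a non-zero code.
import subprocess
import sys

result = subprocess.run([sys.executable, "-m", "pytest", "tests/"])
if result.returncode != 0:
    sys.exit("Unit tests failed: failing the build.")
print("All tests passed: safe to continue to packaging and deployment.")
```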

The Phoenix Project Book Review

I recently read The Phoenix Project, the definitive and often referenced DevOps Business Novel by Gene Kim, Kevin Behr, and George Spafford.

I have to admit I’d been putting off reading this book; perhaps I had some misgivings about replacing Scrum techniques with the new craze of DevOps. Having decided that I was going to invest my time, I was tested once again when I realised that Bill, the book’s main protagonist, is an infrastructure guy… I’m a Development Manager, I want to know what DevOps can do for me, not what I can do for the other guys!

That however was the last time I paused before devouring the remaining 370 pages.

Bill’s first few days after his promotion are littered with disasters. He has a project to deliver, infrastructure problems to resolve, and political ambushes to endure. In other words, he’s in the situation we’ve all experienced when teams aren’t communicating and colleagues are so stacked up with their own projects and priorities that they simply can’t assist each other.

With the timely arrival of Eric, the new potential board member, Bill slowly equips himself with the tools he needs to resolve his infrastructure headaches and get the Phoenix Project back on track. Eric, acting as a kind of DevOps Yoda, helps Bill gain an understanding of the four different types of work, the Theory of Constraints, and Systems Thinking, and see the value of Automated Deployments.

There’s a nice appendix at the end which helps consolidate the ideas and explains some of them away from the story. There are also some mind-blowing statistics about some of the big web companies such as Amazon, Google, and Netflix.

So would I recommend this book to others? Absolutely! In fact I have, repeatedly, to friends and colleagues! Whether you’re a developer, infrastructure guru, or IT manager, this book will introduce you to several new ways of thinking about workload management and IT processes.