My team at GDS – along with most of the rest of GDS – uses agile ways of working. What’s a bit different about our team is that a lot of what we do could be called ‘policy’. We are the only ‘policy’ team in government I have seen work this way.
I put ‘policy’ in inverted commas because I don’t think the word carries much meaning when it’s used in a general sense about the work of civil servants. When people say ‘policy’, I think they are talking about a bundle of things including evidence, judgements, activities and products, which often carry a false distinction from ‘delivery’, but that’s a story for another day.
I was chatting with Mike Dixon of Citizens Advice recently about how this differs from the way policy and research teams operate in other organisations, such as charities, think tanks and research agencies, as well as in government. He explained that many policy and research teams have moved to a matrix structure, where teams form around projects rather than line reporting, and waterfall project management is the norm. In other words, the team structures have been made more flexible, but the project management methods haven’t evolved.
This means that policy and research products are specified upfront, then created over a period of months to match the specification, then delivered. By the time they are delivered, the specification might be out of date or significant changes might have happened within the policy topic, but the likelihood of such change is not accommodated within the project. This mirrors the shortcomings of using waterfall project management to deliver IT and digital services. Technology change makes the service outdated by the time it is procured and delivered, and the final service cannot easily be improved based on users’ feedback.
My experience has been that using agile tools helps circumvent this in policy and research environments. For example, my team is doing a project with the Helen Hamlyn Centre, part of the Royal College of Art, to research ways to deliver assisted digital support for digital by default services. Collaboratively with the design researchers, we have organised this project into two-week sprints. At the end of each sprint, there will be a new set of concepts about how assisted digital can be delivered. The concepts will have gone through initial testing with potential users (or research participants or respondents, as you might also call them). The best ideas will be developed further in the next sprint. And so on, until after three months we will end up with a set of design concepts that have been tested with users and refined based on their feedback. The insight into user needs and preferences is gathered throughout the project, in dialogue about the concepts.
There is no final deliverable or big reveal at the end of the project. The outputs are the concepts, the knowledge gathered in testing and research, and the materials used along the way. We at GDS get to see what’s being developed at the end of every two-week sprint, and the findings can immediately inform our work. Helen Hamlyn colleagues get immediate feedback from us, and our views of how their ideas fit into the wider programme inform the next stage of the research.
I’ve also found agile techniques useful for juggling priorities, allocating work within the team, and sharing limited resource across projects. Even better, the team is increasingly self-managing on matters like prioritisation, and a side benefit has been the team getting closer and working more as a unit. After all, the unit of delivery is the team.
I will say more about that – and what using agile means for how we work together day to day – in other posts. I’ve also been mulling over policy and research projects from the past few years where I feel agile would have helped us avoid some scrapes. I’ll elaborate on that once I’ve found a way to talk about it that protects the innocent.
To be continued…