Agile policy and research

My team at GDS – along with most of the rest of GDS – uses agile ways of working. What’s a bit different about our team is that a lot of what we do could be called ‘policy’. We are the only ‘policy’ team in government I have seen work this way.

I put ‘policy’ in inverted commas because I don’t think the word carries much meaning when it’s used in a general sense about the work of civil servants. When people say ‘policy’, I think they are talking about a bundle of things – evidence, judgements, activities and products – which are often given a false distinction from ‘delivery’, but that’s a story for another day.

I was chatting with Mike Dixon of Citizens Advice recently about how this differs from the way policy and research teams work in other organisations – charities, think tanks and research agencies, as well as other parts of government. He explained that many policy and research teams have moved to a matrix structure, where teams form around projects rather than line reporting, and waterfall project management is the norm. In other words, the team structures have been made more flexible, but the project management methods haven’t evolved.

This means that policy and research products are specified upfront, then created over a period of months to match the specification, then delivered. By the time they are delivered, the specification might be out of date or significant changes might have happened within the policy topic, but the likelihood of such change is not accommodated within the project. This mirrors the shortcomings of using waterfall project management to deliver IT and digital services. Technology change makes the service outdated by the time it is procured and delivered, and the final service cannot easily be improved based on users’ feedback.

My experience has been that using agile tools helps circumvent this in policy and research environments. For example, my team is doing a project with the Helen Hamlyn Centre, part of the Royal College of Art, to research ways to deliver assisted digital support for digital by default services. Collaboratively with the design researchers, we have organised this project into two-week sprints. At the end of each sprint, there will be a new set of concepts about how assisted digital can be delivered. The concepts will have gone through initial testing with potential users (or research participants or respondents, as you might also call them). The best ideas will be developed further in the next sprint. And so on and so on, until after three months we will end up with a set of design concepts that have been tested with users and refined based on their feedback. The insight into user needs and preferences is gathered throughout the project, in dialogue about the concepts.

There is no final deliverable or big reveal at the end of the project. The outputs are the concepts, the knowledge gathered in testing/research, and the materials used along the way. We at GDS get to see what’s being developed at the end of every two-week sprint, and the findings can immediately inform our work. Helen Hamlyn colleagues get immediate feedback from us, including our views on how their ideas fit into the wider programme, and this informs the next stage of the research.

I’ve also found agile techniques to be useful for juggling priorities, allocating work within the team, and sharing limited resource across projects. Even better, the team is increasingly self-managing on matters like prioritisation – another side benefit has been the team getting closer and working more as a unit. After all, the unit of delivery is the team.

I will say more about that – and what using agile means for how we work together day to day – in other posts. I’ve also been mulling over policy and research projects from the past few years where I feel agile would have helped us avoid some scrapes. I’ll elaborate on that once I’ve found a way to talk about it that protects the innocent.

To be continued…

What about people who aren’t online?

My super clever colleague Reema Mehta wrote a nice summary of how digital by default Government will meet the needs of users who aren’t online.

It’s one of the questions we get asked most often at GDS. I’m glad we get asked it often, because it’s important and, for me personally, it’s something that makes working in government interesting. We don’t get to choose to provide services only to people who are able to interact with us digitally; we have to provide services to everyone.

An even shorter summary is:

  • assisted digital – getting digital services to people who are offline
  • digital inclusion – getting people online so they can do loads of digital stuff

Though I’m glad we get asked this question, I’m not at all glad when I hear people say that government can’t go digital by default without 100% of the population being online. In fact it makes me cross because it is not true.

Digital by default means ‘digital services that are so straightforward and convenient that all those who can use them will choose to do so, whilst those who can’t are not excluded’. It does not mean ‘everyone has to use digital services independently so they have to all get online, else they won’t be able to access services anymore’.

There is a role government can and will play – along with citizens, the voluntary and community sector, and the private sector – to support and encourage people and organisations to develop digital skills. But going digital by default is not contingent on this.

The moral of this story is… you could have either assisted digital or digital inclusion without the other, but I’m glad we have both.