Big Data in everyday project governance (Spectator Journey Planner for London 2012)

This month I attended the close-down meeting for the London 2012 Spectator Journey Planner (SJP, or more affectionately known as Sarah Jessica Parker) and we reviewed what we had learnt.

The learnings included the comical, like not including my personal handset on the mobile testing list, which resulted in some serious retesting at the eleventh hour, as well as the hindsight moment of knowing that we had planned well for Olympic scale but had to define processes retrospectively for reducing capacity afterwards. One for all cloud architects everywhere.

I am a big advocate for defining interfaces, including full UI designs, upfront. Our DFD was very comprehensive, as it took guidance from me as well as the ODA, TfL, DfT and the supplier Atos. We all agreed that this contributed to the success of the project. Considering the 50-point review I submitted at 11pm before leaving on a disastrous ski holiday, I am super delighted it was useful.

Data in this project was big. The ODA worked hard to obtain timetable and scheduling information from every relevant travel organisation on mainland Britain. This covered train timetables published a year in advance, as well as bus and river boat information. Ever-changing road information for cars and bikes was continuously added to the systems. LOCOG supplied scheduling information, venue information and top-line ticket sales information. To make it more complicated, a section of the team at the ODA called Travel Demand Management was looking at the routing information and determining changes to the actual transport network as well as to the routes suggested to the public in the software.

In short, there was a lot of data and it was complicated. The project was successful because we could all pull together resources at short notice, and because the budgets and the profile of working on London 2012 warranted it. Having said that, there are three things I have learnt about working with large, complicated data sets that I want to transfer into all my project deliveries in the future.

1. Defining Data Requirements

In the days when we “normalised” our databases, data was our starting point. Now, where we often start with a UI and a strategy outline, data sometimes gets missed out. On the Mascot project we wrote a two-page document outlining what Google Analytics would provide. On the SJP project, time was really well spent on defining how data would be supplied, both in the lead-up to the Games and when, in full flow, an emergency change was required.

I have put this to the test already. Last week, when I was reviewing someone’s business plan using UML, I did a little check. Sure enough, the person who entered the information was missing from the plans, as was the person consuming the reports.

2. Sub-Project Manage Your Data

The SJP was a big project, both in budget and in people employed. Due to the data’s importance it was given a team dedicated to its management. I have seen this work really well in retailers. What made the SJP unique was that this team was part of the project delivery: the data team followed the same processes and testing that were mandated for the software delivery team. Both PMs appeared at the governance meetings, and both deliveries were given the same weight and importance. If budget and opportunity allow, I can see this working really well on a diverse range of projects.

3. Have a data spokesperson

No doubt this one is influenced by working in a communications department. However, get yourself someone who can clearly explain the detail of the data to anyone, especially if it’s complicated or impacts your customer. On the SJP project, real data caused concerns about how real journeys might be conducted, and speculative data from journalists on travel meltdown made the conversation more complicated. A lot of good decisions were made based on the project analysis, but by then discussion on whether the data or the software needed tweaking had clouded all conversations. Having a data authority who can communicate clearly with any stakeholder goes a very long way to ensuring projects of this nature are not derailed.

In the future I am going to treat the data elements of a project like a software delivery, which will give me confidence that the data will be delivered and will actually be useful.


One Reply to “Big Data in everyday project governance (Spectator Journey Planner for London 2012)”

  1. I think a data spokesperson is an excellent idea. The size of the project will determine whether this is a dedicated resource or, more often, part of someone’s role. But I think there is always a benefit in having team members who understand the data from an end-user perspective and can act as an interface to the rest of the technical team.

    I think two other powerful tools are data dictionaries and sample datasets. Data dictionaries have been around as a concept for a long, long time, but wherever possible they should be generated from the schema, with the comments embedded in the source code itself, as this is the best way to ensure the documentation remains accurate as the schema evolves.
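As a minimal sketch of that idea (the `JourneyLeg` schema, its field names and the `doc` metadata key are all invented for illustration), the descriptions can live next to the field definitions in code, and the data dictionary is then generated rather than maintained by hand:

```python
from dataclasses import dataclass, field, fields

# Hypothetical schema: each column's description is embedded in the
# source code itself, so the dictionary below cannot drift from it.
@dataclass
class JourneyLeg:
    origin: str = field(metadata={"doc": "Origin stop code"})
    destination: str = field(metadata={"doc": "Destination stop code"})
    mode: str = field(metadata={"doc": "Transport mode, e.g. rail, bus, river"})
    duration_min: int = field(metadata={"doc": "Scheduled duration in minutes"})

def data_dictionary(cls) -> str:
    """Render a markdown data dictionary from a dataclass's fields."""
    rows = ["| Field | Type | Description |", "| --- | --- | --- |"]
    for f in fields(cls):
        # f.type may be a class or a string depending on annotation handling
        type_name = f.type if isinstance(f.type, str) else f.type.__name__
        rows.append(f"| {f.name} | {type_name} | {f.metadata.get('doc', '')} |")
    return "\n".join(rows)

print(data_dictionary(JourneyLeg))
```

The same approach works with SQL `COMMENT ON` clauses or ORM column docstrings; the key point is that regeneration, not manual editing, keeps the dictionary accurate as the schema evolves.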

    Also, as the size of data sets continues to increase, I think providing sample data sets will become more important. These should reflect the common elements of the data as well as some of the outliers, as with data-based projects it is often these outliers that cause a lot of the bugs, due to common problems such as field lengths or character encoding issues.
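One way to make those outliers concrete (the field-length limit, stop IDs and checks here are invented for illustration) is to pair the sample set with a small validator, with each deliberate outlier chosen to trip exactly one check:

```python
MAX_NAME_LEN = 64  # assumed schema limit for this sketch

def validate_stop(stop: dict) -> list:
    """Return the list of problems a record would cause downstream."""
    problems = []
    if len(stop["name"]) > MAX_NAME_LEN:
        problems.append("name exceeds field length")
    try:
        stop["name"].encode("latin-1")  # stand-in for a legacy feed encoding
    except UnicodeEncodeError:
        problems.append("name not representable in legacy encoding")
    return problems

# Sample set: common rows plus outliers that probe the known failure modes.
SAMPLE_STOPS = [
    {"stop_id": "9400ZZLUKSX", "name": "King's Cross St. Pancras"},  # typical
    {"stop_id": "TEST_LONG",   "name": "X" * 100},                   # length outlier
    {"stop_id": "TEST_UTF8",   "name": "Zürich Flughafen ✈"},        # encoding outlier
]

for stop in SAMPLE_STOPS:
    print(stop["stop_id"], validate_stop(stop))
```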
