
Managing Metrics: Lies, Damn Lies and Statistics

Here’s a common scenario for IAG, the company I work for: a client wants to make a significant improvement in their business analyst organization – AND – they need to demonstrate the performance improvement they’ve made. It happens all the time, and many managers know the pain of getting locked into managing the wrong set of key performance indicators.

Here are the ways IAG approaches this need:

  1. Benchmarking organizations’ requirements maturity: the best possible scenario for setting up a system of metrics is to benchmark an organization’s requirements maturity at a specific point in time – then repeat at periodic intervals. Rather than relying on a single-point metric as the judge of performance, it is best to diagnose people, techniques, process, technology, organization, and documentation standards, and from this establish a baseline of current performance with an action plan for the future. One year from now, snapshot the organization again to validate that performance improvement has been made, and compare against industry data to show the value of these activities.

    Here’s the issue with most organizations: they have a blind spot that is radically reducing their overall performance. It could be deliverables, skills, technology, who knows, but unless it is fixed, the organization may think it’s making huge improvements while actually delivering in less than stellar fashion. Assessing maturity uncovers these issues and enhances the organization’s ability to move forward quickly.

  2. Scorecarding: I love it when a client goes the scorecarding route with IAG for building their metrics program. It’s a robust approach to building a program of metrics that ties these back to broader organizational and strategic objectives. Some people have the idea that a scorecard will be a perfect nirvana – pfffft – not going to happen. It’s a discipline! The idea of the discipline is – first and foremost – to get into it and get value out of it. So it is better to get a scorecard in place in two weeks and concentrate on teaching people the discipline of using it than to strive for the ‘perfect metric’ and wind up not measuring anything. When business people see great value out of an activity – especially metrics-based management – the quality of the metrics that get managed also naturally evolves and improves over time. The problem, for most organizations, is getting the process started.
  3. Side-by-side paired project execution. OK, in English: take two projects of roughly equal size and complexity. On one, do whatever that project team thinks is best (the traditional approach). On the other, follow the new disciplined approach. Repeat three times. There is absolutely nothing like having a project pushed through in half the time and stakeholders singing the praises of the new process. It’s great in theory, but companies often have difficulty doing this internally because there is not enough difference in the people (skills), process, techniques, technology, organization, and deliverables (collectively referred to as ‘requirements capabilities’) used on one project versus the other to produce a substantial difference in outcome. This is where teaming with an outside organization can show what performance change is possible – IF – the outsider can demonstrate significantly more maturity in requirements capabilities. The point being: if you want to run two projects at roughly the same time, make sure there is sufficient difference between the two teams across all requirements capabilities; don’t just change the deliverable and expect a significant performance improvement.

    The other alternative is to compare projects before and after investing heavily in resources and process improvement over a period of months; i.e., establish good metrics on current projects in anticipation of pairing future projects with them and looking at the performance differences. You can also have this kind of research process externally audited, as we did years ago in building statistics on our own methodology. I’ve also done this kind of thing for clients, and you can get fairly strong data showing new versus old – but you do need to appreciate that it is a fairly large project to set up the metrics, conduct the measures multiple times over the course of each project, compile results, and so on. You can get anecdotal results fairly quickly, but it takes a rigorous method, and the discipline to execute on that method over months, to get strong, statistically defensible results.

  4. Finally, you can do look-back reviews. Set up a methodology for determining all the dimensions of requirements maturity on a particular project, then investigate past projects: look at the documentation and interview stakeholders, PMs, and BAs. Compare the findings on maturity with each project’s outcome in terms of on-time and on-budget performance to get your metrics for past projects – and to set the baseline for future performance. This is not my favorite approach simply because it is backward looking – presumably at poor performance – rather than forward looking at performance improvement. I’ve built analysis this way, and it yields an almost perfect line (better requirements maturity = better project performance), but because it is not forward looking, with positive accomplishment represented in the data, it is generally not as compelling to senior management.

Regardless of the method you use for measuring and representing performance, remember there are two types of statistics:

  • Information that is interesting (I’m fat)
  • Information that causes us to change behavior (I’m only expected to live until 55 given my current health)

The same information might be represented in each of these two sets of statistics, but only the second causes me to change behavior (assuming the source was credible). I call this catalyzing your benefits: expressing data in terms that communicate clearly, where the implications of the finding are well understood, and people see that action is needed based on the finding.

Whatever method you choose – catalyze your benefits!



Keith Ellis is Vice-President, Marketing at IAG Consulting (www.iag.biz) where he leads the marketing and strategic alliances efforts of this global leader in business requirements discovery and management. Keith is a veteran of the technology services business and founder of the business analysis company Digital Mosaic which was sold to IAG in 2007. Keith’s former lives have included leading the consulting and services research efforts of the technology trend watcher International Data Corporation in Canada, and the marketing strategy of the global outsourcer CGI in the financial services sector. Keith is the author of IAG’s Business Analysis Benchmark – the definitive source of data on the impact of business requirements on technology projects.