Musings from the real world

Content Copyright © 2006 Bloor. All Rights Reserved.

This is the first of an occasional series (of as yet indeterminate duration) that will draw on what I learn both as an analyst and as a consultant.

One of the challenges that all analysts face is to maintain an understanding of the difference between ‘vendor claim’ and end-user achievement. This is not to say that all of the faults lie with the vendors, far from it: it seems to me that end-user organisations all too often go out of their way to compromise what they, and all around them, are attempting to achieve. When I go to vendors’ offices and, indeed, to the best-run IT shops, I see some things being achieved and running as they should, but mostly I see things that leave me aghast.

Just before Christmas my attention was drawn to an article about a study undertaken by the Canadian Government into why more of their projects appeared to fail than to succeed. Now many of you may say that this failure is simply because they are ‘government’ projects. However, after spending nearly 30 years in the industry, my experience suggests that you should remove your rose-tinted glasses and be realistic about the true value of some of the projects undertaken in the commercial sector. Whilst I have been involved in many projects which were successful by any measure, I have also been involved in (and have observed with alarming regularity) projects which could only be described as successful by someone being extremely economical with the truth. For instance, one project I worked on was a year late, but was deemed a success because everyone had been expecting it to be two years late!

But to return to the Canadian government: what they discovered was that projects tend to fail because they are initiated in such a way as all but to ensure that they fail. The list they compiled, which I have augmented with some ideas of my own, looks like this:

  • Projects fail to define their goal and success criteria clearly.
  • They start without any clear definition of who is responsible for what.
  • At the outset all flexibility is removed by making it clear that the scope, the end date, the budget and the resources to be employed are all but fixed.
  • The business justification is weak.
  • No real attempts have been made to ensure technical feasibility.
  • The correct levels of management buy-in, sponsorship and commitment are missing.
  • There is a lack of the correct level of senior management input at project initiation; yet once the project is under way, management are quick to state their expectations and to voice disquiet at any deviation from those previously unstated constraints.
  • Estimates of effort are made without any recourse to technically competent staff.
  • Management compound all of this with calls for a can-do attitude.

I would contend that we have all been there, seen this, and got the t-shirt on more occasions than we would care to admit. I do not know a single person working in IT who is not convinced that Dilbert is written by an ex-employee.

Over the coming weeks I want to further explore these problems and look at some of the things that the industry needs to do to address them.