Project sponsors: Saving face or saving costs? – Nik Gebhard

Posted on 21 April 2011

Category: practitioner experience


In today’s fast-paced technology world it is common, if not mandatory, for financial institutions to replace legacy systems in order to gain competitive advantage. In my experience, business stakeholders will often have decided on the technology before bringing an analyst or consultant on board. A lack of analysis from the outset means uninformed decisions and, ultimately, reputational risk if the wrong choice is made.

Implementation projects that I have been involved in have typically deferred engaging business analysts until project slippage has arisen. I’m not entirely sure whether this is a scapegoat tactic or a sincere attempt at project recovery.

Either way, the challenge becomes convincing business stakeholders that an analysis of the original requirements and a “system best fit” is required. This proposal is often met with phrases like “budget spent” and “budget available”, and with protests that going back to the proverbial grindstone is too expensive and a waste of time.

Some form of requirements analysis inevitably follows, if only to gain traction and to better understand the ultimate objective and timelines.

You get what you pay for
It is at this point that a number of requirements gaps surface and the back-and-forth of change request documentation begins. Soon enough, the project team spends more time analysing and writing up change request proposals than implementing an off-the-shelf application. Along with bespoke requirements come defects and inescapable timeline delays, not to mention the increased vendor support and maintenance costs for bespoke code.

I find that the greatest misconception in system implementations is the forecast completion timeline. Vendors seem inordinately skilled at pulling the wool over the business’s eyes, instilling the delusion that their system will meet the business need sooner and more fully than any other system on the market. Over-promising and under-delivering is key to landing a contract. After all, it is a dog-eat-dog world.

How much money fixes a bad decision?
It is not long before questions about the technology choice surface in day-to-day project conversation. A silent game of “Who is responsible?” ensues. From a reputational perspective, admitting that the executive team made a poor decision and invested copious amounts of money in it could be lethal. The question, though, is: “Is that more dangerous than continuing to throw money at it until it is no longer a bad decision?”

Personally, I’m for transparency and openness, but I have come to realise that this is not the favoured choice. Businesses will more readily press on with extensive customisation than go back to the drawing board. This approach means that change requests are raised thick and fast, each commanding large sums of money. Suddenly the cost-benefit evaluation that was pivotal during the system selection process no longer applies.

At what point does the reputational cost of admitting to having made a poor decision no longer outweigh the cost of customisation? Can monetary value be applied to reputation?

Should we point fingers, or should we get on with it?
I spent a little over a year on a project implementing an off-the-shelf system that was originally deemed best-fit by business stakeholders. The words “wrong choice” were blasphemy. It very quickly became apparent that customisation was the only way the business was willing to go. The instruction from the project board was to press on. Change requests were raised almost daily. Many months of missed development deadlines and revised budgets went by before the project was eventually completed.

Almost sixteen months have now passed since go-live, and the executive stakeholders and decision makers are no longer in the same department. What’s more, the company is evaluating a new system to replace what was implemented, something it could have been doing almost two years earlier. This is a particularly poor and costly consequence of an even poorer decision.

“Sticking to your guns” can work out
On the other hand, I have seen and been involved in a number of projects where the same “stick to our guns” attitude has been applied and organisations have shown significant growth. I have to ask myself: “Does this mean that this was a less poor choice in technology, or were other factors at play?” I do know that many of these projects also entailed extensive customisation – customisation that has since paid for itself.

Obviously the ideal scenario is for informed technology decisions to be made off the back of extensive analysis. But what if this fundamental step has been missed? At what point do you call the decision good or bad? At what point do you no longer insist that the price tag on reputation is higher than the cost of customisation? Is it a calculated risk or blind luck?

This article originally appeared on Bridging the Gap on 21 April 2011.
