For some time now, I’ve been on the side of independently evaluating and improving software. This is an unusual position to be in and a very interesting one, indeed. For years, I’d been on the software development side as a pure-play software vendor. I’ve also been on the boards of companies where we both used and implemented software. The differences between these perspectives cannot be overstated, nor can the impact of those differences. Let me try to give you a few insights that I hope will make you think:
Quote from a software vendor: “In the demo session, we’re always asked to show everything we have and more. They all say they want to stick to the standard, but in the implementation, they always only use 60% of the standard and ask 40% extra things that were never clear in the first place. Then in actual usage, they only use 10% of what had been configured.”
Quote from a software buyer: “In the demos, they always say they can do everything and blow us away with fancy dashboards and false promises. Then in the implementation, it turns out that certain things we had clearly described beforehand aren’t possible, except at a high cost. And then, when it comes time to use it, our users are only so-so satisfied.”
Whatever side you’re on, I’m sure you recognize your own position. You may notice both are saying the very same thing, but from two very different perspectives. In some instances, there is, of course, an element of plain lying in the sales phase, but believe me, that’s much rarer than buyers tend to think. It happens, but most vendors hate it, because it comes back like a boomerang: salespeople who always overpromise don’t typically last long. Deliberate overpromising is an issue, but certainly not the key problem.
So what is the problem, then? For years, I struggled to articulate it. Now, being on the independent side and heading up a team of brilliant people capable of measuring, analyzing, and monitoring software, I think I finally have an answer. The reason for this constant battle is a basic lack of understanding of how software works and how complex it is.
Some readers may argue that it all comes down to better project management. Managing client expectations and managing software projects is indeed of the utmost importance. It, however, is not the cause of the problem; at best, it’s the cure for it. The problem is that people underestimate the complexity of software. Vendors are too optimistic about their own capabilities; buyers confuse a required new capability with “just adding another feature, which can’t be that difficult.” Well, it often is very difficult.
To get out of this problem, or avoid it altogether: measure, and measure continuously, so that you understand what you’re asking, and what you’re doing.
Buyers: look inside the software; don’t just look at the polished demo at the end showing you the feature you asked for.
Vendors: have buyers look at what they’re asking for, measure what you customize, and demonstrate value.
For all: transparency rules. Make sure you monitor what you’re doing. It will force everybody to stay honest and stay focused on what you really want: great software doing what it needs to do, and satisfied users.
One final remark: in case any politician is reading this, the above is also true for the relationship between you, as a politician who acts as a buyer of software, and the governmental organizations that have to implement new regulations. If anything, the gap may be a lot bigger than in commercial organizations, because of the inherently large distance between parliament and the actual programmer. Don’t be surprised if things go wrong when you don’t understand software at a fundamental level. It’s not possible to build something, or have something built, when you don’t understand it.
Well, it is possible, but at a considerable price.