Recently, Gartner noted a study that purported to show that "between 70% and 80% of BI projects fail." Ordinarily, I would treat this kind of statement, from Gartner or anyone else, as the equivalent of "give me your money or you're going to be in really, really bad trouble." However, the author of the study does have recent CIO experience with business intelligence (BI). Moreover, some of the problems he cited as causing BI project failure are well-known as causes of failures in other areas of IT — in particular, bad communication between IT and corporate. So there is good reason to think that a significant proportion of today's business intelligence projects do indeed fail, by any reasonable standard.
On the other hand, 70% to 80% is an unusually high proportion of projects to fail. For one thing, business intelligence is a well-established discipline in large organizations. Moreover, much of existing BI concerns itself with enterprise reporting — canned querying applied at certain times of the year. Just how plausible is it that an extension of enterprise reporting fails 70-80% of the time, at least 15 years after BI started doing enterprise reporting in the first place? Something seems odd about this figure.
So how significant to the average user, really, is "BI project failure"? And if it is important to a particular user, is the fix really to improve CIO-CEO communications by teaching IT to talk in corporate jargon?
We Have Been Here Before
In fact, BI "projects" bear a surprising resemblance to the software development projects that enterprises have been carrying out for the last 45 years. Often, they involve writing new software. At a minimum, they require writing new queries and using existing software in new ways that demand design and planning beforehand. And even when a developer's expertise isn't required, a "data miner," someone who knows how to translate requests for information into queries, is.
Given this resemblance, we would expect the failure rate of BI projects to have some relation to the failure rate of software development projects. But when we look at the history of attempts to find out development failure rates, we discover, embarrassingly, that no one has ever really succeeded in getting a meaningful figure for failure rates.
That isn't to say that three decades of researchers haven't tried. Efforts have ranged from attempts to measure lines of working code written, to abstractions of functions coded called "function points," to comparative studies, to "bake-offs" between competing development tools. As it turns out, the only meaningful long-term measure of "failure rate" is ... what the enterprise says it is. That is, one firm that started tracking user reports of success vs. failure in the early 1990s found that, consistently, 40-60% of development projects were reported as "late," "never finished," or "didn't do what was asked."
That was true, at least, until about five years ago, when IT shops, offshore firms, and vendor developers began seriously committing to a process called "agile development." As that happened, astonishingly, reported failure rates shrank to 20%, then to 15%, and, in cases where agile was used for everything, to close to 0%.
Full disclosure here: As a programmer long ago in the infancy of development, I was a proponent of agile development ideas well before they surfaced in the Agile Development Manifesto 10 years ago. I am, indeed, proudly prejudiced in favor of both agile development processes and the ideas of business agility that logically follow from them. But I must point out that the research I have done since supports my point. An extensive survey two years ago confirmed that, by every metric I could find, both IT and business, agile development brought 15-45% improvement (a recent IBM study claims 60% improvement). By the same metrics, all other popular processes, tools, and techniques had either minimal effects or none at all.
So the question of whether 70-80% of BI projects fail turns into two questions: if a BI project isn't agile, why wouldn't it have a high failure rate? And if it is agile, how is it credible that failure rates are that high?
Agile Business Intelligence Would Be A Good Idea
The story goes that a reporter once asked Gandhi (who as a lawyer had been in England and knew what he was talking about) what he thought of Western civilization. "I think," Gandhi replied, "that it would be a good idea."
By the same token, asking whether BI projects are agile, given that reported failure rates are 70-80%, is a question that answers itself. Of course most BI projects do not use truly agile processes. There is simply no way that agile BI projects could produce such a failure rate. Rather, this failure rate is an indication that BI is lagging far behind in its ability to achieve an agile process, and therefore, interestingly, in its ability to increase business agility. What do I think of agile BI? I think it would be a good idea.
Certainly I am not the only one to note the failure of projects that enterprises mislabel as agile. Jill Dyche, a long-time consultant, recently caused a stir by decrying projects that try to do BI "on the cheap" by applying agile programming teams uncritically, without the careful planning and communication with the business user that needs to go on (see What Agile Business Intelligence Really Means). For those who don't know, however, frequent communication between the developer and the business user is a key part of a truly agile development process. In effect, Jill, the Gartner author, and I are in this particular case saying much the same thing: there needs to be effective communication between the business, with its needs, and the computing arm that attempts to respond to those needs.