What Agile Business Intelligence Really Means

Thursday Apr 7th 2011 by Wayne Kernochan

Agile business intelligence isn't a one-time upgrade, but a continual process.

One of the written works I most admire, Bruce Catton's history of the Civil War ("Never Call Retreat"), says of the prelude to Missionary Ridge that Grant and his commanders "had inspected the ground beforehand, and, as sometimes ironically befalls the diligent, had unanimously fallen into error" — by concluding that a flank attack could succeed.

Several months back, Jill Dyche, a long-time business intelligence consultant whose blog clearly reflects diligence, savvy, and experience in data warehousing, launched an attack on present-day enterprises attempting to do agile business intelligence (BI) "on the cheap." She claimed that they were not performing the drudge work of developing data models and improving data quality, and were therefore less likely to achieve agile business intelligence. In critiquing these users, Dyche also seemed to imply that the methods she laid out were sufficient to achieve significant improvements in business agility.

I believe that Ms. Dyche is suffering the misfortune of the diligent: assuming that her experience and efforts in BI and familiarity with agile development should lead to improved business agility from BI. It is no knock on her — quite the contrary — to say that I disagree.

Far more important, however, is my sense that we have not yet done the hard work of asking just how so-called "agile" BI can lead to increased business agility, and to the benefits that should attend better business agility. As a result, we are in danger of devaluing agility itself by showering the term "agility" on BI approaches that will, in the end, inevitably disappoint us.


Why BI and analytics in themselves are not agile

We understand the potential of better BI: a deeper understanding of business processes, and of customers. We understand the potential of better analytics: better anticipation of the consequences of business strategies and marketing campaigns. We understand the potential of faster BI and faster analytics: more rapid reaction to emergencies, and competitive advantage over slower competitors. What we do not understand is that faster is not the same thing as more agile.

The key reason that faster is not always better is captured in the saying: going faster gets you to the wrong place quicker. More fundamentally, improving the speed of BI deployment and upgrade ignores what marketing theory calls marketing myopia: the failure to see that the business is, or should be, in a different market. Collecting more and better information about existing customers, giving more weight to reacting to them in the next release, and speeding the improvements those customers ask for yields apparently excellent results in the short term. It also embeds the wrong market ever more firmly into the business processes, so that switching out of that market becomes ever harder.

A glaring example of this in BI, and one it is amazing that more businesses do not note, is the housing bubble and derivatives-driven crash that nearly led to a second Great Depression. The financial models that allowed leading-edge firms to identify seeming flaws in the market, to hedge, and to trade, literally in milliseconds, provided superb profits all along the line, right up to the point where the bubble burst. The models themselves, and the processes of the investment banks that used this kind of BI, became ever more skewed towards managing mortgage-backed securities; indeed, despite the continuing danger of default, some continue to try again, using slight variations on the original scheme. The speed of deployment and upgrade of this BI software was by any standard leading-edge, as was the depth and quality of the information collected. But the inability to collect information that would indicate something was wrong, and the inability to quickly adjust the business to a different market, should have indicated the truth: the banks were becoming less agile and more exposed to downward risk, and sooner or later they would pay.


What does real Agile BI look like?

One of the key insights from recent studies of agile software development is that increasing the focus on the agility of the process, and decreasing the focus on its costs, downward risks, and ROI, can actually (and permanently) decrease costs, increase revenues, increase "upward risks" (which is good!), and decrease downward risks. So real agile BI should not only improve the organization's ability to change direction, proactively as well as in response to changes in its environment, but also create a BI process that itself can change direction, in response not only to evolving user needs but also to changes in the environment.

These findings are seconded by a recent Sloan Management Review survey (Winter 2011, p. 21) of companies, most of which were using analytics. The "top performers" on typical business metrics such as profit differed from the "bottom performers" primarily in two ways: they used analytics to guide their actions in strategy development and in product research and development, and they drew on insights from analytics not only periodically, to guide future strategies, but also daily, to steer day-to-day operations.

Let's be more concrete. Here's my model of an organization's process of ingesting and analyzing information, and its typical goals:


  1. Data entry — accuracy
  2. Data consolidation — consistency
  3. Data aggregation — scope
  4. Information targeting — fit
  5. Information delivery — timeliness
  6. Information analysis — analyzability

Now, clearly, the aim of this process should be to get the right data to the right person as fast as possible, with the right context and tools for deep analysis and maximum effectiveness of resulting actions. My survey two years ago suggested that each stage of the process was contributing to a result that fell far below this standard:


  • 20% of data has errors in it (accuracy)
  • 50% of data is inconsistent (consistency)
  • It typically takes 7 days to get data to the end user (timeliness)
  • It isn't possible to do a cross-database query on 70% of company data (scope)
  • 65% of the time, executives don't receive the data they need (fit)
  • 60% of the time, users can't do immediate online analysis of data they receive (analyzability)
  • 75% of new key information sources that surface on the Web are not passed on to users within the year (agility)
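
Taken together, these per-stage shortfalls compound. A back-of-the-envelope sketch (assuming, purely for illustration, that failures at the five percentage-based stages are independent) suggests how little data survives the whole pipeline intact:

```python
# Per-stage success rates implied by the survey figures above (the two
# time-based findings are left out). Independence across stages is an
# illustrative assumption, not a claim from the survey.
stage_success = {
    "accuracy":      1 - 0.20,  # 20% of data has errors
    "consistency":   1 - 0.50,  # 50% of data is inconsistent
    "scope":         1 - 0.70,  # 70% of data is not cross-queryable
    "fit":           1 - 0.65,  # 65% of the time the wrong data arrives
    "analyzability": 1 - 0.60,  # 60% of the time no immediate analysis
}

survives_all = 1.0
for rate in stage_success.values():
    survives_all *= rate

print(f"data surviving every stage intact: {survives_all:.1%}")  # → 1.7%
```

Under these assumptions, fewer than two records in a hundred make it through every stage accurate, consistent, queryable, well-targeted, and analyzable.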

This last result goes directly to the problem with the lack of agility of today's business intelligence. Almost all of the data that would allow the business to understand how the business strategy and products should change going forward is not delivered in a timely fashion, if at all.

So how would we change the BI/analytics version of this process, and the attendant BI/analytics tools, to improve not only the agility of the process, but also the agility of the business as a whole? In the sections that follow, I examine each stage of the process, and the process as a whole, to suggest some ways that they could become more agile by focusing on agility — based on what we have found already are effective ways of improving agility.

Wayne Kernochan of Infostructure Associates has been an IT industry analyst focused on infrastructure software for more than 20 years.

Improving Business Intelligence Agility

Data entry

Remember, the point of this exercise is to ask how we would change each stage of the information-handling process if we were to focus on improving its ability to change rapidly in response to unforeseen changes. In the case of data entry, the unforeseen changes are new types of data. Often, the signal of a new type of data is an increase in data entry errors, since the inputter is attempting to shoehorn new information into an old data-entry format. So the first rule of agile BI becomes:

There's more information in bad data than good data.

More concretely, agility theory would suggest that instead of chasing the elusive goal of zero data-entry quality defects, we focus on examining bad data for patterns via analytics, and using the insights to drive constant upgrades in the global metadata repository (e.g., via a master data management tool) to reflect new data struggling to be entered. Also, we should feed those insights to new BI-product development to allow user interfaces that semi-automatically decrease data entry errors by matching user data entry needs better.
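
As a sketch of what such pattern-mining might look like (the field names and rejected values here are entirely hypothetical), one could abstract each rejected entry into a coarse "shape" and count recurring shapes as candidate new data types for the metadata repository:

```python
import re
from collections import Counter

def value_pattern(value: str) -> str:
    """Abstract a raw value into a coarse shape: digits -> 9, letters -> A."""
    return re.sub(r"[A-Za-z]", "A", re.sub(r"\d", "9", value))

# Hypothetical log of rejected entries: (field, value that failed validation).
rejects = [
    ("phone", "+44 20 7946 0958"),  # international numbers the form rejects
    ("phone", "+44 20 7123 4567"),
    ("phone", "+1 617 555 0101"),
    ("zip",   "SW1A 1AA"),          # a non-US postal code
]

patterns = Counter((field, value_pattern(v)) for field, v in rejects)

# A recurring failure shape suggests a new data type struggling to be entered.
for (field, shape), n in patterns.most_common():
    if n >= 2:
        print(f"candidate new format for '{field}': {shape} ({n} rejects)")
```

The recurring `+99 99 9999 9999` shape is exactly the kind of signal that could drive an upgrade of the global metadata repository, rather than being written off as "bad data."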

This also means changes in metrics and incentives. Rather than punishing IT for slow "fixes" to problems driven by incorrect data, we should be rewarding IT for effective "near-real-time" upgrades of data-entry interfaces that prevent these problems in the first place.


Data consolidation

Data consolidation is about noting other copies of a datum (or other related records). It has long been noted that BI-type data is usually kept in "data archipelagoes," line-of-business or local data stores that do not adequately connect with a central data warehouse to allow consistency checks. What is much more rarely noted is that this situation is becoming worse, not better — suggesting that we are not only chasing a moving target, but one that we will never catch.

Here's where agility theory comes into play. Traditional IT calls for "discipline" — making each newly-discovered data source toe the line and feed into an ETL front-end to a more central data warehouse or cluster of data marts. Agility theory, on the contrary, emphasizes auto-discovery of each new data source, and automated upgrade of metadata repositories to automatically accommodate the new information.

ETL tools typically do not offer this type of auto-discovery; MDM and data virtualization tools do. But remember: don't choose the tool for its quality achievements, but rather for the features that allow you to adapt quickly to new challenges.
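
A toy illustration of auto-discovery, using SQLite's catalog as a stand-in for a real data source and a plain dictionary as a stand-in for the global metadata repository (both are deliberate simplifications, not a real MDM API):

```python
import sqlite3

def discover_new_sources(conn: sqlite3.Connection, repo: dict) -> list:
    """Register any table not yet present in the (toy) metadata repository."""
    tables = [row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    found = []
    for table in tables:
        if table not in repo:
            cols = [col[1] for col in conn.execute(f"PRAGMA table_info({table})")]
            repo[table] = {"columns": cols}  # auto-accommodate the new source
            found.append(table)
    return found

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sensor_feed (ts TEXT, reading REAL)")

repo: dict = {}
print(discover_new_sources(conn, repo))  # → ['sensor_feed']
print(repo["sensor_feed"]["columns"])    # → ['ts', 'reading']
```

The point is the posture: the repository accommodates whatever turns up, rather than forcing each new source through a pre-planned ETL gate.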


Data aggregation

The distinction between data consolidation and data aggregation is a bit narrow but important: data consolidation is about making sure that the new datum is consistent with previous data (if the system says you're a teacher of student A and the new datum says you're not, it's inconsistent); data aggregation is about giving everyone across the organization who needs the information access to it. Practically speaking, that means querying across data stores — because, as noted above, we are getting further and further from an architecture in which all data is in a central data warehouse's store. And in order to query across data stores, you need a frequently upgraded "global metadata repository" to tell you how to combine data across data stores.

Agility theory flips the traditional approach to BI data aggregation on its side. In traditional BI, the aim is to funnel all possible data to the data warehouse, and then maximize performance of the data warehouse on well-defined, often-repeated queries, with occasional resort to data virtualization in order to handle outlying cases. In real agile BI, the focus is on handling ever more new data types, accessed wherever they are, and proactively searching them out. Data virtualization tools, today, are the best way to achieve this — they often offer auto-discovery of new data sources and data types, and they optimize performance of queries across data stores, leaving each underlying database to optimize itself. Over the long term, this delivers "good enough" performance over a much broader set of data. So the second rule of agile BI becomes:

By focusing on seeking out new data rather than improving performance on existing data, we end up improving overall performance.
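
To make the cross-store idea concrete, here is a toy sketch of a "federated" query: two independent SQLite stores, a hand-rolled metadata map recording which column in each store identifies the customer, and a join performed across them. A real data virtualization tool would plan and optimize this automatically; the store and column names are invented for illustration:

```python
import sqlite3

# Two independent data stores: a central warehouse and a line-of-business mart.
warehouse = sqlite3.connect(":memory:")
warehouse.execute("CREATE TABLE customers (cust_id INTEGER, region TEXT)")
warehouse.execute("INSERT INTO customers VALUES (1, 'EMEA'), (2, 'APAC')")

mart = sqlite3.connect(":memory:")
mart.execute("CREATE TABLE orders (customer INTEGER, amount REAL)")
mart.execute("INSERT INTO orders VALUES (1, 120.0), (1, 80.0), (2, 40.0)")

# Toy "global metadata repository": the customer-key column in each store.
metadata = {
    "warehouse.customers": {"key": "cust_id"},
    "mart.orders":         {"key": "customer"},
}

# A virtualization layer would plan this join; here we do it by hand,
# letting each store answer its own sub-query.
w_key = metadata["warehouse.customers"]["key"]
m_key = metadata["mart.orders"]["key"]
regions = dict(warehouse.execute(f"SELECT {w_key}, region FROM customers"))
totals: dict = {}
for cust, amount in mart.execute(f"SELECT {m_key}, amount FROM orders"):
    totals[regions[cust]] = totals.get(regions[cust], 0.0) + amount

print(totals)  # → {'EMEA': 200.0, 'APAC': 40.0}
```

Each underlying store optimizes its own sub-query; the metadata map is what makes the combination possible, which is why keeping it frequently upgraded matters so much.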


Information targeting

Note that we have suddenly moved from "data" to "information". This is because we have added enough context to the raw data (its connection to other data in the enterprise's global aggregated data store) that it can yield insights — it is potentially useful. Now we have to start the process of making sure it is really used effectively.

Arbitrarily, the first step is to figure out where the information should be sent — who can use it. In traditional BI, first the data is placed in the data warehouse, then it is sent to well-defined individuals — reports to CFOs and LOB executives, analyses hand-delivered by data miners to their business-side customers. The weight of tradition, expressed in customized report software as well as analytics customized for the planning/budgeting/forecasting cycle, hampers changing the targets of information delivery as more agile organizations change their processes.

In agile BI, information goes first to those whose agility has the greatest positive effect on the organization: those involved in new-strategy development and new-product development. In other words, information and alerts about today's products and customer/Web trends show up at the innovator's doorstep today, not at the end of the quarter when reports are generated.

The other key difference in agile BI is that routing (or availability) is not as pre-defined. The point of BI for the masses plus ad-hoc querying is that particular end users may find connections between data — including outside data like social media — that internally-focused, request-driven data miners don't. And that can often mean a slight reduction in security. Balancing security and the organization's "openness" to and from the outside is always a delicate task, but agility theory suggests that more openness can lead to less downside risk. This is because more openness can mean better knowledge of threats and more rapid adaptation to these threats. That leads to the next rule:

Increased focus on exchange of data with the outside rather than defensive security can decrease downside risks (and increase upside risks!).


Information delivery

The next step is to get the information to the targeted person. The big emphasis over the last few years has been on decreasing the time between information aggregation and information delivery. However, the organization does not necessarily fare best with the minimum time between data input and information delivery. Studies have shown that without historical context, users can "overshoot" in adjusting to the latest data. Moreover, speeding up information delivery in BI is a bit like engaging in an arms race: if the competitor matches your speedup, or you leapfrog each other, no benefit from the speedup may be apparent. And finally, benefits and costs are "asymmetrical": if you take one day to carry out a process while your competitor takes five days, speeding up to half a day often yields little additional benefit; but if you take six days to carry out the process, speeding up to three days usually yields a much larger benefit.

One approach suggested by agility theory sounds a little strange: focus on speed of change of information delivery. In other words, instead of focusing on delivering the same information faster, focus on the ability to alter the deliverable information in one's data stores as rapidly as possible as requirements or processes change. Now follow the logic of that: each new datum alters the information that should be delivered to the user. So to get more agile information delivery, you eliminate as much of the information-handling process as possible for as much of the data as possible. To put it another way, you redesign the process so that as much as possible, data is converted to information and routed at the point of input, in real time.

That's the real reason that an event-processing architecture can be agile. It intercepts data being input into the organization and does pre-processing, including direct alerting. The most sophisticated tools add enough context to expand the amount of data that can be so treated. Meanwhile, deeper analyses wait their turn. And upgrading the event processing engine can be a quicker way to handle new data types flowing into the organization's data stores.
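
A minimal sketch of the pattern (the product names, context store, and routing rule are all invented): events are enriched and routed at the point of input, while everything else waits its turn for deeper batch analysis:

```python
from collections import deque

# Toy context store and routing state; all names are hypothetical.
PRODUCT_CONTEXT = {"shoe-42": {"line": "running", "owner": "npd-team"}}

alerts: list = []
batch_queue: deque = deque()

def ingest(event: dict) -> None:
    """Pre-process an event at the point of input: enrich, then route."""
    context = PRODUCT_CONTEXT.get(event.get("product"))
    if context and event.get("type") == "complaint":
        # Enough context is known: alert the responsible team immediately.
        alerts.append({**event, **context})
    else:
        batch_queue.append(event)  # deeper analysis waits its turn

ingest({"type": "complaint", "product": "shoe-42", "text": "heel came off"})
ingest({"type": "pageview", "product": "shoe-42"})

print(len(alerts), len(batch_queue))  # → 1 1
```

Handling new data types then becomes a matter of upgrading the context store and routing rules, not re-plumbing the whole information-handling pipeline.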


Information analysis

Discussions of the value of BI tend to assume that its largest benefit is in improving the decision-making of its users. Interestingly, agility theory suggests that this is not necessarily so. Greater benefits can be achieved if the information delivered improves a business process (e.g., makes book-to-bill or customer relationships more effective), or if the information improves new product development (NPD).

That, in turn, means that real agile BI should focus on analysis tools that make an operational process or NPD better, rather than on better dashboards for the CEO. It should also emphasize BI that detects and suggests adaptations to unexpected change, rather than analytics that improves the forecast of what will happen if everything goes as expected. This means more emphasis on ad-hoc and exploratory information analysis tools, and more "sensitivity analysis" that sees the effect of changes in assumptions or outcomes. One interesting idea is that of "dials" — e.g., sliders that allow us to vary revenues and costs, as ad expenditures or sales change from their forecast levels.
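
As a sketch of the "dials" idea (the price, cost, and response figures are invented for illustration), a toy profit model lets the analyst sweep one assumption and watch the outcome respond:

```python
def profit(units_sold: float, ad_spend: float,
           price: float = 50.0, unit_cost: float = 30.0) -> float:
    """Toy P&L model: one 'dial' per assumption."""
    return units_sold * (price - unit_cost) - ad_spend

# Sensitivity sweep: vary ad spend from its forecast level, assuming
# (hypothetically) that unit sales move in proportion.
for pct in (-20, 0, 20):
    spend = 10_000 * (1 + pct / 100)
    units = 1_000 * (1 + pct / 100)
    print(f"ad spend {pct:+d}% -> profit {profit(units, spend):,.0f}")
```

Interactive sliders in a BI tool do exactly this, just continuously: each dial re-evaluates the model, so the user sees the effect of a changed assumption or outcome immediately.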


The Overall Business Intelligence Process

In traditional BI, reports are pretty invariant and analyses follow predictable patterns, so that "views" can accumulate over time, derived from a stable core. The business agility approach demands that we view BI as constantly changing, in response to the needs of an agile organization, or in order to support proactive NPD or business-process change. Thus, change in traditional BI is an evolution of previous insights — deeper, on larger data sets — while change in agile BI is embedded in the development process and tools — they are designed to change BI processes and solutions constantly.

An agile overall BI process, then, involves three new characteristics:


  1. It incorporates frequent input from the end user and from the environment;
  2. It "spirals in on" upgraded solutions, endlessly;
  3. It emphasizes integration with agile development and innovation.

The word "spiral" is often misunderstood. It conjures up images of developers spending extra time in design to change the solution template, then spending extra time in development to change the code created, etc. Actually, what is really going on in good agile development is iteration of rapid prototyping — design to the finished product in one automated step — combined with "middle in" programming. The developer starts with a set of building blocks and a quick way to generate user interfaces. At the start of each iteration, the end user and developer improve and update the user interface. The developer then generates a prototype that mimics a finished product using "placeholders," and builds "bridges" between the basic building blocks and the user interface. Some of the work of building "bridges" is wasted, but there is far more in the way of time savings from immediate reality-testing of prototypes and from avoiding the design effort of trying to cover every possible case. Updates are more frequent, and represent points where the product is "close enough" to user needs at a particular point in time.

Let's get more concrete, with an example. Wearers begin knocking your latest running shoe on Facebook, sending photos of dislodged heels. Your BI bot on the Internet, or one of your community, notes this within a day and passes back the product info, the nature of the complaint, and the photos to an event-stream processor at your data center. This notes the odd combination of data, relates it to a similar combination at your help desk, and updates the help-desk interface to support and watch for further complaints. It then passes the data directly to a product developer, as well as alerting media communications and marketing, and consolidates and aggregates the data to identify further insights from the complaint.

While media communications deals with immediate fallout, the product developer figures out that for a certain type of runner, the resulting walking gait places unusual stress on this particular type of heel. Rather than attempting to fix the heel only for this particular case, the developer queries past data on shoes for this type of runner and finds, counter-intuitively, that moving the heel forward instead of extending it back or sideways works better. Quick prototyping reveals that the fix not only avoids dislodged heels for this type of runner, but also encourages most wearers to place more weight on the balls of their feet, giving them a feeling of greater energy.

Marketing sells the first prototype to runners as "energy and fashion," while the developer continues prototyping to create a new product for the broader market. Manufacturing re-tools easily to handle the new form factor, and analytics is modified in a "spiral" fashion to search for additional heel patterns that work, both inside and outside the organization. The new insight eventually leads company strategists to ask if foot and leg wear can be modified to encourage improvements at each stage of the human lifecycle.

The company redefines its market as "humanity in motion," incorporating footwear, legwear, exercise equipment, and casts that improve human motion — and make the wearer look energetic. The targets of BI and analytics are modified accordingly, semi-automatically. And then the next insight (which used to be known, quaintly, as a "complaint") arrives.


The Shortcomings of Present "Agile BI"

So why do I say Dyche's approach falls short? My take is that her idea of agile BI incorporates four elements:


  1. Slice "business capabilities" thinly, deploy them quickly ("every 60-90 days").
  2. Agile BI is a "delivery process", a particular type of formal process for creating new BI capabilities.
  3. Organizations should accept that their BI will be upgraded in smaller increments.
  4. Make sure agile BI software development processes take the time to understand the value to the business.

Here's how I think this falls short:


  1. Delivering "business capabilities" in thin slices amounts to making a long-term plan and then following it with relatively little deviation. Yes, there is room for feedback from on high as to "business value," but there is not much consideration of information targets or information sources beyond traditional ones, nor enough ability to vary functionality based on customer feedback or the evolution of user needs.
  2. Agile BI is far more than a delivery process; it is a way of thinking about every BI process, as well as BI's effects on the business as a whole. By restricting agile BI to delivery processes, Dyche's approach fails to improve the agility and effectiveness of the BI information-handling process (discussed above) and to identify new applications of agile BI that would deliver greater "bang for the buck" — like embedded analytics in NPD or "self-adjusting" business processes.
  3. If all the organization has to do is to accept that BI will be upgraded in smaller increments, it will perpetuate a mindset in which the business accepts whatever BI dishes out. Instead, the organization should be changing its mindset to "thinking agile", and the aim of the consulting organization should be to empower the organization with the agile approach so that it can operate on its own, coming back to the consultant not for the next project but for best practices.
  4. The idea that agile BI development is too focused on speed seems to me to be a dead giveaway, not only that some organizations are trying to be agile "on the cheap", but also that Dyche believes that development should listen to experts who know BI. That, in turn, can lead to development typically getting user feedback at third hand and later, rather than establishing direct connections to end users and using those connections frequently. That, in turn, will inevitably lead to long-run slower response to unexpected change, lack of proactivity, and "fighting the last war."

Overall, then, Dyche's approach allows somewhat more agility in initial BI delivery, at the price of potentially cementing un-agile processes for the long term. I would predict that the initial results would be more rapid delivery of higher-quality BI — after which everyone would declare the war won and move on to the next buzzword. Few will notice that agile analytics has not been a silver bullet, since it doesn't deliver permanent competitive advantage. But real agile BI should.


Conclusions

One of my frustrations in assessing the horde of products that are now flooding the computing market with marketing claims of "agility" is that I feel compelled to tell both vendors and users that, in fact, their solutions are far from achieving full business agility. I am sure that the reaction is often, "What arrogance this guy has to tell me that on the basis of very little knowledge of my product/organization!"

In fact, I have a very simple "smell test" about the product or organization. If the presenter is clearly thinking in agile terms — then business agility is a real possibility. But if the presenter starts focusing on speed, or "flexibility" from modularity and interoperability, or better preparation for the future from better predictions, or better management of anticipated risks, a red flag goes up. These folks are not thinking about how to design their processes to constantly adapt to the unexpected, nor how to improve innovation by better exposure to the outside environment. They are proceeding down the same old organizational path that, system dynamics has suggested, may lead to eventual overload and breakdown of a constantly patched business process. If the organization is not thinking agile, the solution is unlikely to be agile.

The problem with agile BI today, therefore, is fundamentally a failure of imagination. Clearly, Dyche is an example of the best that agile BI has to offer; and she pays diligent attention to folks like Stephen Swoyer, who have paid their dues in the agile development community. And yet, I find it easy to generate many more ideas than are obvious in her blog on how to use BI to improve business agility; and few if any other BI folks that I hear about really seem to be generating such ideas, either. They are, instead, using a narrow idea of "agility" to bless their products and approaches.

My recommendation for buyers looking for "agile BI," then, is to brainstorm, not about what faster or deeper BI will buy you, but rather about how you can make every part of your organization agile, and then how BI can help. I have noted a few of the tools out there that should play a role — data virtualization, event processing, master data management with global metadata repositories. The rest is demanding that the tool or vendor or business process fit your agility vision, not accepting that you must accommodate the shortcomings of existing tools. You already know that you can buy useful BI. If you don't try, you'll never know whether you can buy agile BI for the agile organization.


Copyright 2017 © QuinStreet Inc. All Rights Reserved