Posts Tagged ‘business intelligence’

Again this week, I am gathering together a few reads that have stuck in my mind, for one reason or another.

The future of Analytics
The Data Warehouse Institute (TDWI) has a series of “Best Practice Reports”; a recent one is called Delivering Insights with Next-Generation Analytics.  It provides an analysis of the future of analytics, backed up with survey results.  It characterises BI as central to analytics in a business context (and it’s hard to say which part of business analytics BI would not be involved in).  Reporting and monitoring remain crucial components of such activity, but TDWI places an emphasis on differentiating users of information and analytics, from production-report consumers (wide in scope but terse in analytical focus) to the power-user analysts and managers concerned with forecasting and modelling.  The essence of its recommendations is to provide appropriate tools to those differentiated users, and to keep an eye on technology.  Although at a top level this isn’t exactly news, the report is packed with useful detail for those making an effort to keep on top of the intersection between business and technology.

The future of Data Warehouses
Although I had a look at some new technology in data warehousing recently, this second TDWI report (Next Generation Data Warehouse Platforms) is necessarily more systematic.  It models the DW technology stack, outlines new technology and business drivers, intersperses user stories, and identifies emerging trends (e.g. appliances, in-memory, cloud/SaaS, columnar, open source) not too different from my list.  Recommendations include: focus on the business drivers; move away from expensive in-house development; prepare for high-volume data; and anticipate multiple-path solutions, including open source.

In-memory databases
TDWI’s above report treats in-memory DWs seriously, without going into much detail on feasibility.  This is odd, given that one of its recommendations involves preparing for an explosion in the volume of data to be stored.  I read a discussion of this technology (TDWI again: Q&A: In-memory Databases Promise Faster Results), which still doesn’t convince me that this isn’t a cat chasing its own tail.  The only realistic way forward I can see is to develop a dichotomy between core and peripheral data and functionality.  I haven’t seen that discussed.  Yet.

Forrester on trends and spotting them
Forrester has a new report aimed at Enterprise Architects: The Top 15 Technology Trends EA Should Watch.  These are grouped into five themes: “social computing for enterprises, process-centric information, restructured IT service platforms, Agile applications, and mobile as the new desktop”.  Some of it is discussed here, by Bill Ives.  Forrester also outlines the criteria it uses when deciding whether a technology deserves attention: how meaningful it is in the near term, its business impact, its game-changing potential, and its integration complexity.

Vendor news: Oracle and bulk financials
Finally, news that Oracle has made another acquisition, this time taking over HyperRoll, whose software is geared to analysing “large amounts of financial data”.  It sounds a sensible move.

Read Full Post »

[Part one of this discussion looked at different definitions of BI, and a very salient example of how it can be done well.]

When I’ve presented to people on the opportunities inherent in business intelligence, they marvel when they see information directly relevant to their work presented in a new and meaningful light: summarised, for example, or detailed, or given direct visual impact that promotes new insights.

That’s the easy part.  Delivery is harder.

When I need to take a step back and assess what I am doing, I ask:

What does business want out of business intelligence?

This is particularly pertinent if a BI implementation is less than successful – and I’ve never seen an implementation that really, I mean really, delivers.  I’m not talking about simply analysing business requirements, but about understanding what is needed to deliver effectively.

There are many different ways of answering this question.

1) The anecdotal

My experience is probably not too different from that of many others.  In general, the feedback I’ve had from business stakeholders is:

  • They don’t know what they want; and/or
  • They want you to do it all for them

That’s a bit glib, but later I’ll extract some value from it.  In fact, as long as you’re delivering tangible value, I’ve found that business information consumers are reasonably happy.  It’s easy enough to rest on that, but as a professional it pays to think ahead.  Unfortunately, there remains a need for a level of business commitment to information issues – and I’m not talking about getting training in the tools or the data qua data, but about adopting an information-as-strategic-resource mindset.

2) The statistical

In a recent survey run by BeyeNetwork, the top two desires of business for BI are:

  • Better access to data
  • More informed decision-making

Axiomatic, no?  These effectively say the same thing, but there is nuance in each.

On the one hand, can business get whatever information they can possibly envisage, and in a format (whether presentation or analytical input) they can use effectively?  Clearly not – that’s a moving target.  But it’s also a goal to constantly strive for.

On the other hand, for business decisions to be made, the question needs to be asked: what would support the business in that process?  That’s too high-level for an immediate answer from most people.  Drilling into the detail of the processes is business analysis.  Maintaining such an understanding of business processes should rightly belong with the business, who should be fully on top of what they do and how they do it.  In practice, it’s often only when prompted by necessity – such as analysing information needs – that the exercise is done with much rigour.

3) The ideal

In an ideal world we would provide the knowledge base for a worker to be truly effective – which includes not just the passive support information, but the active intelligence that can generate useful new insights.  There’s a lot that can go into this, but the wishlist includes fuller realisation of:

  • Data integration: information from disparate sources (not just databases)
  • Transformation: from data to business meaning
  • Presentation: insightful representation of information (the current buzzword being visualisation)
  • Discovery: the opportunity to explore the information
  • Timeliness: information when they need it, where they need it, with no delays
  • Control: the ability to save (and share) the meaning they encounter
  • Tools: a good, intuitive user experience – no learning hurdle, no techy barrier
  • Technical integration: seamless integration with the software and hardware environment (applications and devices respectively)
  • Autonomy: the ability to do it themselves

That last one is interesting: it’s the exact opposite of what I said I’d experienced.  But the gap there is in the toolset, the environment in which the information is presented.  If it’s something they can intuitively explore for themselves, and extract meaning from without a painful learning curve, they would want to do it themselves.
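As a small aside, the first two wishlist items – integration and transformation – can be sketched in a few lines of Python.  Everything here (the file names, column names and margin rule) is invented purely for illustration: the point is simply that data from disparate sources gets combined, then translated into a measure the business actually talks about.

    # Hypothetical sketch: integrate two disparate sources, then transform to business meaning.
    import csv

    def load(path):
        with open(path, newline="") as f:
            return list(csv.DictReader(f))

    # Integration: sales from an ERP extract, costs from a finance spreadsheet export.
    sales = load("erp_sales_extract.csv")      # assumed columns: product_id, revenue
    costs = load("finance_costs_export.csv")   # assumed columns: product_id, cost

    cost_by_product = {row["product_id"]: float(row["cost"]) for row in costs}

    # Transformation: from raw figures to something with business meaning.
    for row in sales:
        revenue = float(row["revenue"])
        cost = cost_by_product.get(row["product_id"], 0.0)
        margin_pct = 100 * (revenue - cost) / revenue if revenue else 0.0
        print(row["product_id"], f"margin: {margin_pct:.1f}%")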

This can’t be achieved by the data professional in isolation.  Achieving the above requires collaborative effort: with business stakeholders, other IT professionals, and software vendors.

I don’t think there’s any BI implementation out there that delivers to the ideal.  Better business engagement, better business commitment, more resources for BI, better software tools, better integration: these would help.

We will get a lot closer to the delivery ideal.  But by then, BI will look rather different from today’s experience.

The dangling question: are new paradigms needed for BI to be fully realised?  If it is so hard to properly achieve the potential of BI today, there must be ways of working better.

Read Full Post »

“we had the data, but we did not have any information”
– CIO to Boris Evelson (Forrester), on the global financial crisis.

Vendor marketing messages have been said to contend that only 20% of employees in BI-using organisations are actually consuming BI technologies (“and we’re going to help you break through that barrier”).

Why is the adoption of BI so low?

That was my original question, brought about by a statistic from this year’s BI Survey (8).  As discussed in a TDWI report, in any given organisation that uses business intelligence, only 8% of employees are using BI tools.

But does it matter?  Why should we pump up the numbers?  It should not be simply because we have a vested interest.

That raises two further questions:

What is BI, and why is it important?
BI is more than query, analysis and reporting from a database:

“Business intelligence (BI) refers to skills, technologies, applications and practices used to help a business acquire a better understanding of its commercial context” – Wikipedia

It’s a very broad definition.  A rather more technical one from Forrester:

“Business intelligence is a set of methodologies, processes, architectures, and technologies that transform raw data into meaningful and useful information used to enable more effective strategic, tactical, and operational insight and decision-making. . . .”

But it can be explained more simply as:

data -> information -> knowledge -> insight -> wisdom

Data can be assembled into information.  Information provides knowledge.  Knowledge can lead to insights (deeper knowledge), which can beget wisdom.  Is there any part of an organisation that would not benefit from that process?  If there are any roles sufficiently mundane that insights won’t help them improve the job, improve their service delivery, then I guess those roles would not benefit from BI.  Yet I would suggest they are few and far between, and they should be automated as soon as possible, because you can bet that employees filling those roles won’t feel fulfilled, won’t feel motivated.
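To make that chain slightly more concrete, here is a toy sketch in Python.  The figures and the “declining sales” rule are invented for illustration only: raw transactions (data) are summarised by region (information), and a simple rule flags something worth a closer look (the beginnings of insight).

    # Toy illustration of data -> information -> insight (all figures invented).
    from collections import defaultdict

    # Data: raw transaction records.
    transactions = [
        {"region": "North", "month": "Jan", "sales": 120},
        {"region": "North", "month": "Feb", "sales": 80},
        {"region": "South", "month": "Jan", "sales": 95},
        {"region": "South", "month": "Feb", "sales": 140},
    ]

    # Information: sales summarised by region.
    totals = defaultdict(int)
    for t in transactions:
        totals[t["region"]] += t["sales"]
    print(dict(totals))

    # A modest "insight": flag regions whose monthly sales are falling.
    for region in totals:
        monthly = [t["sales"] for t in transactions if t["region"] == region]
        if len(monthly) > 1 and monthly == sorted(monthly, reverse=True):
            print(f"{region}: sales declining {monthly} - worth a closer look")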

Business intelligence has a part to play in that whole process.  At the lowest level, it can simply provide data for others to analyse; but it can contribute at every step of generating wisdom from data.  In that sense, it is intrinsic to an organisation’s aims, and everyone has a part to play in it.

I started into this subject aiming to canvass the reasons behind poor BI takeup.  After some research and reflection on my own experiences, though, I found a whole book’s worth of material in that simple question.  So it’s not something I can lay out simply, in one take.

Demonstration
First, let’s see an example of good use of data – one, in fact, that demonstrates both the adding of value to the data, and the presentation and imparting of insight.

That wonderful organisation TED (“ideas worth spreading”) has a presentation by Hans Rosling, a Swedish professor of International Health.  Start with Rosling’s entry at TED, and look at any one of the presentations there.  The first has the most oomph, but they are all good.  Why?  Meaningful data, good presentation tools and a Subject Matter Expert.  (Thanks to Mike Urbonas for the reference).

Rosling’s presentations are a prime example of business intelligence done right. The data was gathered from multiple sources, its quality assessed, and the whole assembled and presented in a fashion that gave its audience insights. In fact, the presentation tool he uses, Trendalyzer, although later bought by Google, was originally developed by his own foundation, Gapminder.org. (There are similar tools, such as Epic System’s Trend Compass; MicroStrategy also has a comparable offering.)
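For a flavour of what that style of presentation involves at the very simplest level, here is a hedged sketch of a Gapminder-style bubble chart in Python with matplotlib.  The country figures are invented placeholders, not Rosling’s data, and a real Trendalyzer-style view would animate the bubbles over time.

    # Hypothetical sketch of a Gapminder-style bubble chart (invented figures).
    import matplotlib.pyplot as plt

    countries = ["A", "B", "C", "D"]
    income_per_capita = [2_000, 8_000, 20_000, 45_000]   # x axis
    life_expectancy = [55, 68, 75, 82]                   # y axis
    population_millions = [30, 120, 60, 10]              # bubble size

    plt.scatter(income_per_capita, life_expectancy,
                s=[p * 10 for p in population_millions], alpha=0.5)
    for name, x, y in zip(countries, income_per_capita, life_expectancy):
        plt.annotate(name, (x, y))
    plt.xscale("log")
    plt.xlabel("Income per capita (log scale)")
    plt.ylabel("Life expectancy (years)")
    plt.title("One frame of a Trendalyzer-style time series")
    plt.show()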

Much as it might look like it, I wouldn’t say the job began and ended with Rosling. Whatever other parts he played, his role here is that of SME. Yet his presentations clearly demonstrate the involvement of other roles, from data analyst to system integrator to vendor/software developer.

Barriers to BI takeup

So where to start?  Everyone has an opinion.

Rosling: “people put prices on [the data], stupid passwords, and boring statistics”.  In other words, he wanted data to be free, searchable, and presentable.  Integration and system issues aside, he found his barriers to be data availability and the expressiveness of his tools.

Pendse:  he gave a number of barriers, including “security limitations, user scalability, and slow query performance… internal politics and internal power struggles (sites with both administrative and political issues reported the narrowest overall deployments)… hardware cost is the most common problem in sites with wide deployments; data availability and software cost;… software [that] was too hard to use…”

In grouping together the issues, I found the opportunity to apportion the responsibility widely.  All roles are important to the successful dissemination of a business’ intelligence: CEO, CIO, CFO, IT Director, IT staff, BI manager, BI professional (of whatever ilk), implementation consultant, vendor, SME (too often underrated, or not rated at all!), all the way down to the information consumer.

Comments welcome.  See part two for some discussion about gaps that exist in the delivery of BI.

Read Full Post »

Interesting to read of an I.T. “glitch” in healthcare in Canada. Reported here, no doubt it made the rounds as one more example of computers gone wrong.

The story:

Recently, a doctor called Saskatoon Health Region’s medical imaging department to find out why he hadn’t received his patient’s test results. After some investigation, it was found that a fax machine had not sent them out. That machine was part of the region’s automated Radiology Information System. Further investigation revealed that that particular machine [or the system’s link to it] had been inoperative for about ten years. The result: at least 1,380 test results had never been sent out.

On the one hand, it’s possible to say that the failure rate is low: roughly 1,400 missed out of about two million communications, a rate of around 0.07%. And doubtless a good number of those missed communications would have resulted in followups, closing the loop the second time around.

But on the other hand, this is a medical situation, with health and lives at stake. There should be greater certainty – particularly where manual error has ostensibly been eliminated.

Sure, the doctors involved are likely all highly-paid professionals. But I’ve heard that line before, and my experience working with a community of highly-paid professionals demonstrates that this does not ipso facto lead to quality outcomes.

What does this have to do with business intelligence?

BI has by now evolved to cover a wide variety of information solutions, with data and its distribution managed centrally (in theory at least), and available via a plethora of channels. What was once reporting now encompasses alerts and dashboards in particular, as well as varied options for visualisation, analysis, and tools of information empowerment for business stakeholders.

But how do we know what should be there a) is there, and b) gets communicated to the right people?

When reading the above medical anecdote, I am immediately reminded of alerts in BI systems. Alerts only sound when there is something wrong. Yet the above is a good illustration of an automated system validated so infrequently that a full decade can pass before a breakdown is discovered. To the business manager: how do you know you will be alerted as you expect – as you originally specified when the BI system was implemented?

My experience with databases is that information that is frequently touched (made use of) at the BI consumer end is more likely to be exposed to quality management than data that is, say, languishing unexplored until the odd occasion that it is called up – and too frequently found to be lacking.

Likewise at a meta level. Alerts are a great way of automating – outsourcing – the checking of problem situations. But for a business manager to rely on the absence of alerts really raises the question: how confident can you be that the automation is working as expected?

This level of confidence is generally outsourced – to those who assure the manager they have the answer to their needs: the BI manager (and/or implementer).

It is incumbent on the BI implementer to test their build sufficiently. Yet business changes, data changes, and it remains incumbent on the BI manager to ensure their systems maintain that degree of integrity.

Especially for alerts, which only fire when there is an issue, the BI professional needs to be vigilant.
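One hedged sketch of what that vigilance might look like in practice: alongside the real alerts, schedule a “heartbeat” alert that is guaranteed to fire, and treat its absence as a failure of the alerting channel itself. The names and thresholds below are hypothetical – it is the pattern, not the code, that matters.

    # Hypothetical heartbeat check for an alerting pipeline.
    # Assumption: every alert that gets sent is logged with a timestamp.
    from datetime import datetime, timedelta

    HEARTBEAT_NAME = "daily_heartbeat"   # a dummy alert scheduled to always fire
    MAX_SILENCE = timedelta(hours=26)    # a day, plus some slack

    def notify_operations(message):
        # Stand-in for whatever escalation path the organisation actually uses.
        print("ESCALATE:", message)

    def check_heartbeat(alert_log):
        """alert_log: list of (alert_name, sent_at) tuples from the delivery log."""
        sent_times = [ts for name, ts in alert_log if name == HEARTBEAT_NAME]
        if not sent_times or datetime.now() - max(sent_times) > MAX_SILENCE:
            # The silence is itself the alarm: the channel, not the data, has failed.
            notify_operations("Alert channel may be down: heartbeat not received")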

Read Full Post »

There’s been some fierce publicity (try here or here, for example) around the release of Wolfram Alpha, touted as a new kind of search engine.

But is it a search engine?

Perhaps it’s being marketed as such to appeal to the average internet user’s comfort zone – perhaps also in an attempt to redirect traffic from one of the web’s most popular sites (Google, of course).

But in concept, it would seem to be more of an intersection between Google and Wikipedia – an attempt to provide answers from its own compendium of knowledge (certified in a way that Wikipedia’s store is not).

The answers it provides are often through simple database query. Yet it intends to integrate (potentially disparate) information from different data sources, giving presentation-level answers to the user.

Sound like business intelligence?

My common understanding of business intelligence is a toolset that allows users to query, analyse, and report from a data source. Is this not what Wolfram Alpha does?

Usually BI is understood as the navigation of an organisation’s own, internal data – but that is really only an issue of access and availability. I for one would like the opportunity to integrate internal information with any available external information (with the caveat of confidence in the quality of the source data). Wolfram clearly does not provide hermetic company information – but in a conceptual sense, could it provide what would be asked of business intelligence tools?…

Performance? Always an issue – unless the toolset is tuned well enough that there are no complaints from business stakeholders. In this case, the matter is fully externalised… rather like cloud computing, except that the end-user has no input into response times. This could be an issue for complex requirements.

The interface. The end user would seem to have little to no control over the presentation layer – a definite minus. As for query structure, it’s intended to be natural language – which can be a boon or a barrier, depending on the user. I would not be surprised if Wolfram eventually permitted a more syntactically pedantic query language…

And therein perhaps lies a departure: we are exposed neither to a rigorous syntactical query language nor – more importantly, in some ways – to the format of the source data: its extent and its limitations. Thereby it becomes more an information-discovery process than a deterministic information provider: if we don’t know with precision the extent or dynamism of the source data, we cannot be sure we will get like information from like queries.

A devil’s advocate could accuse me of nit-picking. As a BI professional, I have a strong preference for understanding the source data in the development of any information system. But I find it hard to argue that that preference is a rigid necessity for the provision of business information/intelligence.

At first glance, Wolfram doesn’t seem to be geared for the strict world of BI. More broadly, I can see a space for a Wolfram appliance, sold for internal use, but with access to the external Wolfram data stores.

Why not? Such an appliance wouldn’t fill every business need. But it would address a few that are not currently met. A bit more access to data structures, query process, and presentation level, and it could compete admirably in the BI market.

So yes, in a sense I do see it as a BI tool – of sorts, and in potential. More than that, it’s in danger of expanding – exploding – our understanding of business intelligence. Perhaps taking it more to where it should be.

Postscript: I declined to mention throughout this discussion Wolfram’s limitations. It is rather appalling as a generalised search engine – unless your queries are very standard high school fare (and you’re American). Yet I believe it has a sweet spot, which again would come from having a reasonable understanding (even so far as just a listing) of its data sources.

Read Full Post »

I mentioned before that my first exposure to business intelligence was via the Brio toolset. I certainly had my frustrations with it – crashing, not being able to achieve what I wanted with the tools, and the sometimes slow response.

Crashing became far less of a problem from version 6.4. Slow query response is often enough an issue with the data source – optimising the database for OLAP is a good start. Plenty of memory is also an enormous help in this area.
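As a hedged example of the kind of source-side optimisation I mean, the sketch below (using SQLite purely for illustration; the table and column names are invented) pre-builds a summary table and an index, so that reporting-style queries don’t have to scan and aggregate the raw transactional rows every time.

    # Hypothetical illustration: preparing a source database for reporting-style (OLAP) queries.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE sales (sale_date TEXT, product_id INTEGER, amount REAL);

        -- An index on the columns reports typically filter and group by.
        CREATE INDEX idx_sales_date_product ON sales (sale_date, product_id);

        -- A summary table, refreshed on a schedule, so reports touch far fewer rows.
        CREATE TABLE sales_daily_summary AS
            SELECT sale_date, product_id, SUM(amount) AS total_amount
            FROM sales
            GROUP BY sale_date, product_id;
    """)
    # Reports would then query sales_daily_summary rather than the raw sales table.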

But then, I longed to try some of the other tools that were out there and buzzy – Cognos, for example. Little did I appreciate that it had its own frustrations, and that there were several virtues to the Brio tool. Although geared only to ROLAP (relational) querying, it had all the various conceptual layers (user interface, data model, query, data, analysis) in a clean interface, and a pivot section that was very useful for data analysis. It is particularly useful for exploring relational tables and the data therein, although – unsurprisingly – it helps to have a good understanding of the schema under navigation.

Brio had a healthy market presence in Australia, though it was not as strong in its home base, the US. In the frenzy of BI consolidation over the past ten years, it got swallowed up by Hyperion, which in turn was bought by Oracle.

Wherein lies the rub. Oracle is the software equivalent of an industrial conglomerate, and has made quite a few purchases over the years. These include, notably, PeopleSoft (a particularly hostile takeover), Siebel, Hyperion, BEA, and the newest takeover, Sun Microsystems.

Once the mergers down the line are taken into account, it should be apparent that Oracle would find itself with a lot of duplication of functionality – and it did, several times over.

Along the way, Brio was successively renamed and wrapped into other products. It found its way into Hyperion Performance Suite (as Hyperion Intelligence), then became buried within OBIEE (Oracle Business Intelligence Enterprise Edition) as Hyperion Interactive Reporting.

Although, as HIR, it faces both obscurity and competition from its OBIEE stablemate, the erstwhile Siebel Analytics, the old Brio is still getting decent writeups. The OLAP Report, referring to Brio as a “pioneer of interactional relational analysis”, praises its “ease-of-use on the one hand, and basic-level security features on the other”, positioning it at a departmental level as an “ideal smaller scale solution”. HIR/Brio also gets mentions in IT-Toolbox from time to time, particularly in some of the discussion groups (it also has a wiki section on Brio, mainly a set of how-tos). There is also a BI blog with a useful post or two on it.

Indeed, the market has moved on, and BI has expanded in a number of directions. But the core of Brio, as Oracle’s Hyperion Interactive Reporting, is a zippy little tool that will certainly maintain its fans.

Read Full Post »

At the same TDWI meeting at which I was introduced to the Data Provisioning paradigm, I asked a fellow consultant: Why a Data Warehouse?

His response: why indeed? With current hardware and technology, there is no real reason to invest so much in data infrastructure: more storage and more resources are all that is needed.

That’s not really the answer, but it’s part of it.

When I was first properly introduced to business intelligence, my brief was to support the delivery and development of reportage to (some of) the business units in the organisation. The toolset was Brio, and the database was a “development” copy of the production ERP’s Oracle database. It was not a data warehouse – the relational tables remained in the normalised OLTP format – and certainly no MOLAP (cubing) was involved.

Advantages: quick turnaround time and cheap delivery. I also developed an innovative solution that pretty much amounted to the fungible data marts that Ms Heath was talking about in Data Provisioning: effectively a takeaway, disposable app-and-data file.

Disadvantage: only practical at the smaller scale. When it came to delivering a more sophisticated dashboard-type solution, response times became quite unwieldy, and I doubt that solution was widely adopted.
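For what it’s worth, here is a hedged sketch of that “takeaway” idea in Python, using SQLite purely for illustration (the source query and file names are hypothetical): the result set of interest is copied into a small, self-contained database file that the consumer can carry away and explore offline.

    # Hypothetical sketch: package a query result as a disposable, takeaway data mart.
    import sqlite3

    def build_takeaway_mart(source_db_path, mart_path, extract_sql):
        source = sqlite3.connect(source_db_path)
        mart = sqlite3.connect(mart_path)          # a single portable file
        cur = source.execute(extract_sql)
        cols = [d[0] for d in cur.description]     # column names of the extract
        mart.execute(f"CREATE TABLE extract ({', '.join(cols)})")
        mart.executemany(
            f"INSERT INTO extract VALUES ({', '.join('?' for _ in cols)})",
            cur.fetchall())
        mart.commit()
        source.close()
        mart.close()

    # Usage (hypothetical names):
    # build_takeaway_mart("reporting_copy.db", "finance_mart.db",
    #                     "SELECT cost_centre, period, amount FROM gl_summary")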

Stick with an OLTP relational database? Report off a star-format data warehouse? Report off a MOLAP cube? There’s no single answer.

A manager once said to me that technology delivery could not, in practice, deliver everything: you could have it fast, cheap, or reliable – even two out of those three – but never all three. You would have to sacrifice one corner of this demand triangle. The solution could be fast and cheap, but not reliable. Or it could be fast and reliable – but not cheap. Or cheap and reliable, but not fast.

Depending on the context, “fast” could refer either to development time or to response time for the end user. But the point is that no single answer delivers on all of a business’ requirements. Otherwise, we’d all be doing that.

But we don’t all report from MOLAP data, nor from ROLAP or HOLAP. MOLAP gives fast response times, but the development costs are higher, and so is build time (which can be an issue when timely data is needed). ROLAP solutions give slower user response, but can report off near-realtime data. HOLAP, more of a balance between response and build times, is a compromise that can be good for non-recent data in particular.

As for a data warehouse, it can fulfil several purposes (though it’s important to note that not all such purposes are intrinsic to the consequent star schema). Yes, it costs in terms of development time and ETL. But the denormalised star schema is better suited to query transactions (as opposed to adds, updates, and deletes). The different logical format can also be easier to navigate – although if there are meaningful but more complex ways to navigate the OLTP database, they can easily be lost.

And a data warehouse is at once a) a repository for data from multiple sources; b) a locus for the enforcement of corporate data governance; and c) an opportunity to apply some data cleansing. That is not to mention the dimensional cubes that can be built off it for even faster analytical processing.
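To give that “different logical format” a concrete (and entirely hypothetical) shape: in a star schema, a typical reporting question becomes one fact table joined to a handful of small dimension tables, rather than a long walk through normalised OLTP tables. A minimal sketch of such a query, with invented table and column names:

    # Hypothetical star-schema query: one fact table, two dimensions.
    star_query = """
        SELECT d.calendar_month,
               p.product_category,
               SUM(f.sales_amount) AS total_sales
        FROM   fact_sales f
        JOIN   dim_date    d ON d.date_key    = f.date_key
        JOIN   dim_product p ON p.product_key = f.product_key
        WHERE  d.calendar_year = 2009
        GROUP  BY d.calendar_month, p.product_category
    """
    # The OLTP equivalent would typically thread through orders, order lines, products,
    # categories and a calendar, with more joins and more chances to lose the intended
    # business meaning along the way.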

I’m not wedded to cubes, or even data warehouses. I retain a natural suspicion of any transformation that obscures meaning behind the original data – although cleansing can be involved, ideally I would like the ability to navigate – when necessary – the original formats and (sometimes dirty) values. Yet on the other hand, I love the theoretical opportunity to draw data together from multiple sources, clean it, and apply corporate data governance policies.

But even a data professional can’t always have everything.

Read Full Post »
