AgileBI: How Corporate culture influences the development approach

In an earlier blog post I discussed various building blocks which can help to establish and manifest an agile approach to Business Intelligence within an organisation. In this article I will focus on the aspect “Agile Mindset & Organisation”. The probing question is which approach is best suited to developing a data warehouse (DWH; not just the BI Frontend) in line with agile principles. Closely linked to this matter is the question of cutting user stories. Is it sensible to size a user story “end to end”, i.e. from the connection of the source, through the staging area and core DWH, all the way to the output (e.g. a dashboard)? As you can imagine, the initial answer is “it depends”.

When looking at the question more closely, we can identify multiple factors that will have an impact on our decision.

Organizational structure

First of all, I would like to distinguish between the following two forms of development organisation:

  • Organizational Structure A) DWH and BI Frontend are considered one application and are developed by the same team.
  • Organizational Structure B) DWH and BI Frontend are considered separate applications (loosely coupled systems) and are developed by different teams.

Characteristics of the Organizational Culture

As a next step we need to differentiate between two possible characteristics of organizational culture, which are taken from Michael Bischoff’s book “IT Manager, gefangen in Mediokristan” (English: “IT Manager, trapped in Mediokristan”). A nice review of the book can be found in this blog entry (in German).

  • “Mediokristan”: The “country” Mediokristan is described as a sluggish environment where hierarchies and frameworks are predetermined, and risk and management concepts are the dominating factors. It stands symbolically for the experiences I have had in large corporate IT organizations. Everything moves at a slow pace, cycle times tend to be high; mediocrity is the highest standard.
  • “Extremistan”: The “country” Extremistan is best described as the opposite of Mediokristan. New, innovative solutions are developed and implemented quickly, fostered by individual responsibility, self-organization and a start-up atmosphere. Its citizens strive for the extreme: extremely good, extremely flexible, regardless of the consequences. What works will be pursued further, what does not will be discontinued. Any form of regulation or framework is rejected just as extremely if it is seen to hinder innovation.

Needless to say, the two cultures described are extremes; there are various other characteristics between the two ends of the spectrum.

Alternatives to agile development approaches

The third distinction I would like to point out is between the following two alternative approaches to agile development:

  • Development Approach A) Single Iteration Approach (SIA): A number of user stories is selected at the beginning of an iteration (called a sprint in Scrum jargon), with the goal of having a potentially deployable result at the end of that iteration. With Organizational Structure A in place (see above), the user story is cut “end to end” and has to encompass all aspects: connecting the required source system (if not already available), data modeling, loading the data into the staging layer, EDWH and data mart layer and, last but not least, creating a usable information product (e.g. a report or a dashboard). Processing the user story also includes developing and carrying out tests, writing the appropriate documentation, etc. It is a very challenging approach and will generally require a team of T-skilled people, where each team member possesses the skills to handle any and all of the upcoming tasks over time.
    [Figure: Single Iteration Approach (SIA)]
  • Development approach B) Pipelined Delivery Approach (PDA): The PDA exists because of the assumption that the SIA is unrealistic in many cases. One reason is the lack of T-skilled people in our specialist-driven industry or, in extreme cases, the involvement of multiple teams (e.g. separate teams for business analysis and testing) in the development process. Another reason is the sheer complexity we often see in DWH solutions. An iteration cycle of two to four weeks is already quite ambitious when – in the figurative sense – doing business in Mediokristan.
    As an alternative, the PDA describes the creation of a DWH/BI solution based on a production line (see also the book by Ralph Hughes: “Agile Data Warehousing Project Management: Business Intelligence Systems Using Scrum”, Morgan Kaufmann, 2012). The production line (= pipeline) consists of at least three work stations: 1. Analysis & Design, 2. Development, 3. Testing. In the illustrations shown below (taken from a concrete customer project) I added a fourth station dedicated to BI Frontend development. A user story runs through each work station in one iteration at the most. Ideally, the entire production line is run by the same team, which we will assume in the following example:

    • The user stories that are to be tackled in the early stages are defined and prioritized during a regular story conference at the outset of a production cycle. They will thereafter be worked on in the first work station “Analysis & Design”. Evidently, in this initial phase, the pipeline as well as some members of the team are not used to full capacity. In accordance with the Inception Phase in the Disciplined Agile approach, these gaps in capacity can be used optimally for other tasks necessary at the beginning of a project.
    • After the first iteration, the next set of user stories that will be worked on will be defined at the story conference and passed on to the first work station. Simultaneously, the user stories that were worked on in the first work station in the previous iteration will be passed on to the next one, the development station.
    • After the second iteration, more user stories will be chosen from the backlog and passed on to the first work station, while the previous sets move further along the pipeline. Thus, the user stories of the first iteration will now be worked on in the testing work station. (A word about testing: of course we test throughout development too! So why would we need a separate testing iteration? The reason lies in the nature of a DWH, namely data: during the development iteration a developer works and tests with a limited set of (test) data. During the dedicated testing iteration, the full – and ideally the production – data set is processed over multiple days; something you can hardly do during development itself.)
    • Consequently, the pipeline will have produced “production ready” solutions for the first time after three iterations have passed. Once the pipeline has been filled, it will deliver working increments of the DWH solution after each iteration – similar to the SIA.
      [Figure: Pipelined Delivery Approach with four work stations]

      [Figure: Testing inside one pipe cycle]

The two approaches differ in the overall lead time of a user story between the story conference and the finished solution, which tends to be shorter with the SIA than with the PDA. What they have in common is the best practice of cutting user stories: they should always be cut vertically to the architecture layers, i.e. “end-to-end” from the integration of a source system up to the finished report (cf. Organizational Structure A) or at least to the reporting layer within the DWH (cf. Organizational Structure B). While the PDA does incorporate all values and basic principles of agile development (a topic that might be taken up in another article), in case of doubt the SIA is more flexible. In practice, however, the SIA is much more challenging to implement, and the temptation to cut user stories per architectural layer (e.g. “Analysis Story”, “Staging Story”, “Datamart Story”, “Test Story”) rather than end to end is ever present.
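To illustrate the lead-time difference, here is a minimal sketch with purely invented numbers, using the three work stations from the example above: under the PDA a story needs one iteration per work station before it is production ready, while under the SIA it is finished within the iteration it was selected for.

```python
# Minimal sketch (assumptions only): lead time of a user story under SIA vs. PDA.
PDA_STATIONS = ["Analysis & Design", "Development", "Testing"]

def pda_ready(entry_iteration: int) -> int:
    """Iteration at whose end a story entering the pipeline becomes production ready."""
    return entry_iteration + len(PDA_STATIONS) - 1

def sia_ready(entry_iteration: int) -> int:
    """Under the SIA the story is done at the end of the iteration it was selected for."""
    return entry_iteration

for entry in range(1, 5):
    print(f"Story batch entering in iteration {entry}: "
          f"SIA ready after iteration {sia_ready(entry)}, "
          f"PDA ready after iteration {pda_ready(entry)}")
# The first PDA batch is production ready after three iterations; once the pipeline
# is filled, every further iteration still delivers an increment, but each individual
# story carries the longer lead time.
```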

When is which approach best suited?

Finally, I would like to show how the aspects mentioned above correlate with each other.

Let’s have a look at “Organizational Structure A) DWH and BI Frontend are considered one application”. If the project team works in Extremistan and consists mostly of T-skilled members, chances are that they will be able to implement a complete DWH/BI solution end-to-end using the SIA. Success with this approach is less likely if large parts of the team are located in Mediokristan: internal clarifications and dependencies will require additional time for analysis and design, and internal as well as legal governance regulations will have to be considered. Due to these factors, in my experience the PDA has a better chance of success than the SIA when developing a DWH/Frontend solution in Mediokristan. Or, to put it more positively: yes, agile development is possible even in Mediokristan.

The situation I have come across more often is “Organizational Structure B) DWH and BI Frontend are considered two separate applications”. As a matter of principle, agile development with the SIA is simpler for the BI Frontend than for DWH backend development. That being said, the environment (Mediokristan vs. Extremistan) also has a great impact. It is possible to combine the two approaches (PDA for the DWH backend, SIA for the BI Frontend), especially if the BI Frontend is connected to an existing DWH that does not need to be adapted for every user story in the BI Frontend. Another interesting question in Organizational Structure B) is how to cut the user stories in the DWH backend. Does it make sense to formulate a user story when there is no concrete user at hand and the DWH is, in practical terms, developed to establish a “basic information supply”? And if yes, how do we best go about it? An interesting approach in this case is Feature Driven Development (FDD), described in this blog article by agile visionary Mike Cohn. Adapting the FDD approach to DWH development might be interesting material for a future article…

As you can see, the answer “It depends” mentioned at the beginning of this blog post is quite valid. What do you think? What is your experience with Agile BI in either Mediokristan or Extremistan? Please feel free to get in touch with me personally or respond with a comment to this blog post. I look forward to your responses and feedback.

(A preliminary version of this blog has been posted in German here. Many thanks to my teammate Nadine Wick for the translation of the text!)


Steps towards more agility in BI projects

“We now do Agile BI too” – we often hear statements like this at conferences and in discussions with customers and prospects. But can you really “do” agility in Business Intelligence (BI) and data warehouse (DWH) projects just like that? Is it sufficient to introduce bi-weekly iterations and let your employees read the Agile BI Memorandum [BiM]? In my experience, at least, this doesn’t work in a sustainable way. In this post I’ll try to show the basic cause-and-effect relations that finally lead to the desired agility.

[Figure: DWH automation]

If, at the end of the day, we want more agility, the first step towards it is “professionalism”. Neither an agile project management model nor an agile BI toolset is a replacement for “good people” in the project and operations teams. “Good” in this context means that the people who develop and operate a BI solution are masters of what they do, review their own work critically and don’t make beginner’s mistakes.

Yet professionalism alone isn’t enough to reach agility. The reason is that different experts often apply different standards. Hence the next step is the standardization of design and development procedures. The goal is to use common standards for the design and development of BI solutions – not only within one team, but ideally across team and project boundaries within the same organization. An important aid for this are design patterns, e.g. for data modeling and for the design and development of ETL processes as well as of information products (like reports, dashboards etc.).

Standardization in turn is a prerequisite for the next – and I’d say the most important – step towards more agility: the automation of as many process steps as possible in the development and operation of a BI solution. Automation is a key element – “Agile Analytics” author Ken Collier even dedicates multiple chapters to this topic [Col12]. Only if we reach a high degree of automation can we work with short iterations in a sustainable way. Sustainable means that short iterations don’t lead to an increase in technical debt (cf. [War92] and [Fow03]). Without automation, e.g. in the area of testing, this isn’t achievable in reality.
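To make this a bit more concrete, here is a minimal sketch of one such automated check: a simple reconciliation of a measure between two DWH layers. All table and column names are invented, and sqlite3 merely stands in for the real DWH platform so that the example stays self-contained and runnable.

```python
# Minimal sketch of an automated reconciliation test between two DWH layers.
# Table and column names are hypothetical; sqlite3 is only used as a stand-in.
import sqlite3

def reconcile(conn, staging_table: str, mart_table: str, measure: str) -> bool:
    """Return True if the summed measure matches between the two layers."""
    cur = conn.cursor()
    staging_total = cur.execute(f"SELECT SUM({measure}) FROM {staging_table}").fetchone()[0]
    mart_total = cur.execute(f"SELECT SUM({measure}) FROM {mart_table}").fetchone()[0]
    return staging_total == mart_total

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE stg_sales (amount NUMERIC);
        CREATE TABLE dm_sales  (amount NUMERIC);
        INSERT INTO stg_sales VALUES (100), (200), (300);
        INSERT INTO dm_sales  VALUES (100), (200), (300);
    """)
    assert reconcile(conn, "stg_sales", "dm_sales", "amount"), "layers diverge!"
    print("Reconciliation passed.")
```

Checks like this, scheduled after every load, are what keep short iterations from silently accumulating technical debt.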

Now we are close to the actual goal: more agility. If you can release new and changed features to UAT every two weeks, for example, they can be released to production in the same rhythm if needed. And this – the fast and frequent delivery of features in your BI solution – is what sponsors and end users perceive as “agility”.

(this blog was originally posted in German here)


Literature:

[BiM] Memorandum for Agile Business Intelligence: http://www.tdwi.eu/wissen/agile-bi/memorandum/

[Col12] Collier Ken: Agile Analytics, Addison-Wesley, 2012

[War92] Cunningham Ward: The WyCash Portfolio Management System, http://c2.com/doc/oopsla92.html, 1992

[Fow03] Fowler Martin: Technical Debt, http://martinfowler.com/bliki/TechnicalDebt.html, 2003

Testing for BI & DWH

Testing has always been part of every IT project plan – and that holds for Business Intelligence (BI) and Data Warehouse (DWH) projects as well. The practical implementation of testing in the BI/DWH environment, however, has confronted me with trouble again and again. I have often had the impression that the BI/DWH world is still in the Stone Age regarding development processes and environments – at least it is significantly behind the maturity level I know from the software engineering domain. The chart below illustrates this gap:

[Figure: Cultural differences between the software development and BI community (Source: http://agiledata.org/essays/culturalImpedanceMismatch.html)]

If anything is tested at all, things in the BI frontend area are typically tested manually. In the DWH backend we see – besides manual tests – self-coded test routines, e.g. in the form of stored procedures or dedicated ETL jobs. However, integration into a test case management tool and systematic evaluation of the test results don’t happen. This contrasts heavily with the software engineering domain, where automated regression testing is combined with modern development approaches like test-driven design. For some time now there have at least been first inputs regarding BI-specific testing (cf. the (German) TDWI book here). Concepts and papers are patient, though. Where do we stand with regard to possible tool support, namely for the area of regression tests?

Since summer 2014 we at IT-Logix have been actively looking for better (tool-based) solutions for BI-specific testing. We do this together with the Austrian company Tricentis. Tricentis develops the Tosca product suite, one of the world’s leading software solutions for test automation. In a first step we ran a proof of concept (POC) for regression tests of BI frontend artefacts, namely typical reports. One of the architectural decisions was to use Excel and PDF export files as the basis for our tests. With this choice of a generic file interface we avoided the effort of developing BI-product-specific tests, and this way we reduced the implementation effort in the POC to about two days. The goal was to run “before-after” tests in batch mode. We took 20 reports for the POC (these were actually SAP BusinessObjects Web Intelligence reports, but you can imagine whatever tool you like as long as it can export to PDF and/or Excel). A current version of the PDF or Excel output of a report is compared with a corresponding reference file. Typical real-life situations where you can use this scenario are:

  • Recurringly scheduled regression tests to monitor side effects of ongoing DWH changes: The reference files are created at some point, e.g. after a successful release of the DWH. Imagine there are ongoing change requests at the level of your DWH. Then you want to make sure these changes only impact the reports where a change is expected. To make sure the rest of your reports aren’t affected by any side effects, you run your regression tests e.g. every weekend and compare the newly produced files with the reference files.
  • BI platform migration projects: If you run a migration project, for example to migrate your SAP BusinessObjects XI 3.1 installation to 4.1, you’ll want to make sure reports still work and look the same in 4.1 as they did in XI 3.1. In this case you create the reference files in XI 3.1 and compare them with the ones from 4.1. (As the export drivers vary between the two versions, the Excel exports in particular are not very useful for this use case. Still, PDF worked pretty well in my experience.)
  • Database migration projects: If you run a database migration project, for example migrating all your Oracle databases to Teradata or SAP HANA, then you want to make sure all of your reports still show the correct data (or at least the same data as was shown with the original data source…).
[Figure: Sample configuration of a test case template using the GUI of Tosca (Source: IT-Logix POC)]

Tosca searches for the differences between the two files. For Excel this happens on a cell-by-cell basis; for PDF we used a text-based approach as well as an image-compare approach.
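Tosca’s implementation is its own, of course, but the underlying idea of a cell-by-cell “before-after” comparison is easy to sketch. The following snippet uses openpyxl; the file names are placeholders.

```python
# Sketch: cell-by-cell comparison of a current Excel export against a reference file.
from openpyxl import load_workbook

def diff_workbooks(current_path: str, reference_path: str) -> list[str]:
    """Return human-readable differences between two Excel exports."""
    current = load_workbook(current_path, data_only=True)
    reference = load_workbook(reference_path, data_only=True)
    differences = []
    for sheet_name in reference.sheetnames:
        if sheet_name not in current.sheetnames:
            differences.append(f"missing sheet: {sheet_name}")
            continue
        ref_ws, cur_ws = reference[sheet_name], current[sheet_name]
        for ref_row, cur_row in zip(ref_ws.iter_rows(), cur_ws.iter_rows()):
            for ref_cell, cur_cell in zip(ref_row, cur_row):
                if ref_cell.value != cur_cell.value:
                    differences.append(
                        f"{sheet_name}!{ref_cell.coordinate}: "
                        f"expected {ref_cell.value!r}, got {cur_cell.value!r}")
    return differences

if __name__ == "__main__":
    for issue in diff_workbooks("report_current.xlsx", "report_reference.xlsx"):
        print(issue)
```

In a real setup such a script (or the test tool doing the equivalent) would run in batch over all exported reports and feed its findings into the test case management.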

[Figure: Depending on the chosen test mode the differences can be visualized differently (Source: IT-Logix POC)]

Using the solution implemented during the POC we could see very quickly which reports were different in their current state compared to the reference state.

Another important aspect of the POC was the scalability of the solution approach, as I work primarily with (large) enterprise customers. If I have not just 20 but hundreds of reports (and therefore test cases), I have to prioritize and manage the creation, execution and error analysis of these test cases somehow. Tosca helps here with a feature to model business requirements and connect them with the test cases. Based on that we can derive and report classical test metrics like test case coverage or test execution rate.
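As a trivial illustration of the kind of metric such a requirements-to-test-case mapping enables (all identifiers and numbers are invented):

```python
# Test case coverage = share of requirements that have at least one test case.
requirements_to_testcases = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # not yet covered
}

covered = sum(1 for testcases in requirements_to_testcases.values() if testcases)
coverage = covered / len(requirements_to_testcases)
print(f"Test case coverage: {coverage:.0%}")  # -> 67%
```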

[Figure: Requirements and test cases are tightly related (Source: IT-Logix POC)]

In my eyes an infrastructure like Tosca is a basic requirement to systematically increase and maintain quality in BI/DWH systems. In addition, advanced methods like test-driven development are only adoptable for BI/DWH undertakings if the necessary infrastructure for test automation is available.

In this blog post I’ve shown a first, rudimentary solution for regression tests of BI frontend tools. In a next article I’ll show possibilities for implementing regression tests for DWH backend components.

Event recommendation: Learn about a real-life scenario for running a SAP BusinessObjects migration project in an agile manner; test automation is key there and is explained in more detail. Join me at sapInsider’s BI2015 in Nice in mid-June. Find more information here.

(This blog post was first published by me in German here)

The bug paradox: When fixing the bug leads to wrong reports

My workmate Christoph Gnodtke wrote an excellent blog about how to identify SAP BusinessObjects Web Intelligence reports which are impacted by various calculation changes in newer BO versions. What I would like to point out here is that not only BO 4.x migrations are concerned but also “simple” service/support package upgrades, e.g. from XI 3.1 SP2 to SP6. In my current customer case we’ve found many reports which were obviously created in a wrong way: the table structure contains the merged dimension (e.g. [Merged Country]) whereas the cells within the row use a variable based on the original dimension (e.g. a Where operator using [Query1].[Country]). In our case the business requirement would have been to use the merged dimension there as well. As outlined here, in former BO support package levels a bug had the effect that the example just mentioned still showed what the business expected. Now (e.g. in XI 3.1 SP6) that the bug is fixed, the reports start to show wrong values.

Although the software 360Eyes doesn’t solve the problem, it at least helps to identify the reports concerned. Unfortunately we still need to look into every single report and compare between the version running on the XI 3.1 SP2 environment and the one on SP6. In order to speed up this process we use 360Cast. This software provides features similar to BO publications, e.g. for report scheduling and bursting. Its main advantages, particularly for report testing, are twofold (compared to BO’s out-of-the-box features):

  1. Report selection for a scheduling job can be done using good old BO categories. That means you can assign e.g. a test category to all reports you want to test in one single run. In our customer case we use categories for each data mart. In 360Cast, instead of choosing every single report individually, we simply select all reports of this test category.
    [Screenshot: Category selection in 360Cast]
    In order to run all these reports with one single click, there is just one thing missing: providing all the necessary prompt values, often the same values for the same prompts (like Year) across many reports. This is where the second advantage comes into play:
  2. To provide prompt values, 360Cast accepts both manual input values (where a value can be applied to all prompts with the same name) and values from an Excel sheet (or even from an SQL query). We usually use the Excel alternative. Based on this we can easily vary input parameters for different test purposes by simply using another Excel sheet. In addition we can specify the export format and the recipients, e.g. by providing an email address.
    [Screenshot: Prompt setting mapping – the values in the drop-down menus correspond to the columns in the underlying Excel spreadsheet]

In the end, 360Cast doesn’t solve the initial problem either. But at least we don’t need to run every report (identified by 360Eyes earlier) on its own: we can automate the refresh process and easily rerun reports, e.g. with different prompts, by simply modifying the values in the Excel list.
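Leaving 360Cast’s own features aside, the Excel-driven pattern itself is easy to sketch: read one set of prompt values per row and feed it to whatever refresh or scheduling mechanism is available. Everything here – the file name, the report name and the refresh_report helper – is a hypothetical placeholder.

```python
# Sketch: drive report refreshes from prompt values stored in an Excel sheet.
from openpyxl import load_workbook

def load_prompt_sets(path: str) -> list[dict]:
    """Each row of the sheet is one set of prompt values; the headers are the prompt names."""
    ws = load_workbook(path, data_only=True).active
    rows = ws.iter_rows(values_only=True)
    headers = next(rows)
    return [dict(zip(headers, row)) for row in rows if any(row)]

def refresh_report(report_name: str, prompts: dict) -> None:
    # Placeholder: in a real setup this would call the scheduling/refresh API of your BI platform.
    print(f"Refreshing {report_name} with prompts {prompts}")

if __name__ == "__main__":
    for prompt_set in load_prompt_sets("prompt_values.xlsx"):  # hypothetical file
        refresh_report("Sales Overview", prompt_set)           # hypothetical report
```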

The Generic BI front-end Tool Selection Process

Finally – I’m blogging again. Time flew by, and “my life as a BI consultant” kept me busy with migrations from Oracle to Teradata or from BO XI 3.1 SP2 to SP6. Or maybe you’ve also heard about our BusinessObjects Arbeitskreis (which can be translated as “workshop”), which I’d call the only BOBJ-dedicated conference in Europe: www.boak.ch. It was a pleasure to build and execute an interesting agenda for our participants as well as to welcome great people like Jason Rose, Mani Srini, Saurabh Abhyankar, Mico Yuk or Carsten Bange. I also had the pleasure of giving the closing keynote at BOAK, which was about BOBJ front-end tool selection. I used this opportunity to further develop a generic yet simple method for approaching front-end tool selection. I formulated the basic idea already in my last blog post back in April 2013. But I agree with some of the commenters that this first rule of thumb was maybe too specific to be applied in all situations. Therefore let me share what I think is a more generic approach – by the way, you can of course use this for other BI vendors and not only SAP BusinessObjects (the following illustrations are just examples – the listed and selected tools don’t have any concrete meaning!):

PART A: Preparation
Step 1: List all available BI front-ends

The first thing to do is to get an overview of which BI front-end tools are generally available from a specific BI vendor. As I’m a big fan of working interactively with people, e.g. gathering in front of a whiteboard or flipchart, I suggest you write the product names on sticky notes and post them on the flipchart:

[Figure: Step 1 – all available BI front-ends listed on sticky notes]

Step 2: Divide tools into “in scope” and “out of scope”

Depending on your environment you can do a first, yet very rough, tool selection and divide the initially listed tools (see step 1) into two groups:

“Out of scope”: This group is maybe easier to start with: if you don’t have SAP BW as a source, you can eliminate all tools working with BW only. Or if your security policy prohibits the use of Flash, maybe Explorer or Xcelsius are out of scope a priori.

“In scope”: All the tools which are not out of scope.

[Figure: Step 2 – tools divided into “in scope” and “out of scope”]

PART B: Build a working hypothesis
Step 3: Select the tool which covers most of your requirements

This step assumes that you have quite a clear understanding of the business needs that are to be solved with a BI solution. I’m fully aware that this is often not the case. But to keep the basic process for tool selection as simple as possible, I won’t go into details about how to find the “right” requirements. Not yet, but maybe in a further blog post.

Anyway, let’s represent the total amount of requirements symbolically as a circle. Now think about which tool has the broadest coverage of your requirements. Take the sticky note and put it onto the circle. Please be aware that this is only a “working hypothesis” – trust your gut feeling – you can always revise your tool choice later on in the process.

[Figure: Step 3 – the tool covering most requirements placed on the circle]

Step 4: Select the tool which covers most of your remaining requirements

Repeat step 3: think about which tool might cover most of your remaining requirements and put the corresponding sticky note onto the circle.

[Figure: Step 4 – the second tool added to the circle]

PART C: Validate your working hypothesis

Nothing is more annoying than “strategies” which exist only on paper and cannot be put into practice. Keep in mind that you’ve just built what I call a working hypothesis. Now you should validate it and test it against reality. This will show whether your gut feeling regarding tool selection was right or wrong.

So far you have selected two tools. They represent a selection hierarchy. For any given or new requirement (or group of requirements) you should now do a hands-on test. Always start with the first chosen tool: How well can you implement the requirement? Does the implementation fulfil your expectations? What do your end users think about it? Do they like it? For now I leave it up to you to define the “success criteria” that decide when a prototypical implementation passes the hands-on test and when it doesn’t. In any case, if the implementation passes the hands-on test, you should go with tool #1 for this kind of requirement, now and in future situations.

If the implementation fails the hands-on test with tool #1, move on to tool #2 and repeat the hands-on test with that one. Hopefully your prototypical implementation now passes the test and you can decide to go with tool #2 for this kind of requirement, now and in future situations.

[Figure: Step 5 – validating the tool selection hierarchy with hands-on tests]

What happens if a prototypical implementation fails the second hands-on test too? There are three alternatives (condensed into a small sketch after this list):

  • If you fail the second hands-on test for, let’s say, <10% of requirements, you should think about a specific solution for these obviously very special requirements: maybe you simply continue to solve them “manually” in Excel? Maybe you need to buy a niche tool for them? Just find a pragmatic solution case by case.
  • If you fail the second hands-on test for let’s say <30% of requirements, you should think about adding a third tool to your tool selection hierarchy.
  • If you fail the second hands-on test for let’s say <60% of requirements, you should definitely revise your working hypothesis and play through another tool selection hierarchy.
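Purely for illustration, the three fallback rules can be written down in a few lines; the thresholds are nothing more than the rough rules of thumb from the list above.

```python
def next_step(failure_share: float) -> str:
    """failure_share = share of requirements that failed the hands-on test with tool #2."""
    if failure_share < 0.10:
        return "solve these few special requirements pragmatically, case by case"
    if failure_share < 0.30:
        return "add a third tool to the tool selection hierarchy"
    # the list above stops at <60%; anything beyond calls for a rethink anyway
    return "revise the working hypothesis and play through another tool hierarchy"

print(next_step(0.25))  # -> add a third tool to the tool selection hierarchy
```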

Closing Notes

I’m fully aware that the outlined process is simplistic. That’s why you might not be able to use it “as is” in your current frontend tool selection project. But it shows the basic idea – namely to build a tool selection hierarchy and validate it with hands-on tests – of how to narrow down the number of useful tools in a given context, and it is your job to apply or adapt it to your environment. Let me know what you think about it – and how it works in your environment!

The Rule of Thumb for BOBJ Tool Selection

What is the right SAP BusinessObjects frontend for a given situation? A question I’m asked nearly every day. When I was first confronted with this topic a few years ago, the approach taken was a highly sophisticated Excel spreadsheet to assess all available BOBJ tools based on a feature list. The only problem: at the bottom line there was never a clear winner. The next approach was the famous decision tree, like the following:

[Figure: BO tool decision tree]

Not bad as a first guess. And in an ideal world, where the basic functionality would be the same for all BOBJ tools, such a tree could indeed work. But given that even today – nearly ten years after the acquisition of Crystal by BO – support for universes is still not exactly the same in Webi, Crystal Reports and Xcelsius (aka Dashboards), and that especially the maturity of a tool or one of its sub-components varies vastly, there is no clever way to tell you which tool to use for which purpose.

Although you can’t give a definite answer to the question “which tool to use for what”, I’m convinced that the following rule of thumb will be valid in most situations and for a majority of organisations – the only assumption is that there is no limitation arising from licensing. That means I assume you have a license for all, or at least the most important, frontend tools. The idea behind this rule is that a priority rating is more helpful than a feature- or use-case-driven decision tree.

Here is my rule of thumb:

  1. Try it with Web Intelligence
  2. If Webi didn’t work, try it with Crystal Reports
  3. If Crystal Reports didn’t work, try it with one of the “niche” tools

Let me share some thoughts about this priority list:

Why should we start with Web Intelligence? There are various reasons for this:

  • From a feature perspective, Web Intelligence provides the widest range in the BOBJ tool suite. You can use Webi to create classical standard reports, you can use it for dashboard-like applications (think about Input Controls and the ease of use regarding drilling – e.g. compared to Xcelsius…), you can use it for self-service reporting, you can use it as a data pump using the XLSX export, or as an interface to other applications using BI Web Services, etc.
  • From a maturity perspective, it is one of the most stable and mature applications in the BOBJ world. I tell you this as a native “Crystal guy”. But whereas Crystal Reports 2011 runs as stably as it has for the last decade, the new Crystal Reports for Enterprise is just crap compared to both the legacy CR and Webi.
  • From a data source perspective: Webi is the only tool which fully supports everything a Universe can do. I’ve never heard of any limitation where Webi would not support something you can do in a Universe (by design). Compare this to Crystal Reports: on the one hand you can use only UNX universes in CR4Ent, on the other hand not all types of queries are supported. Crystal still has the limitation that if a universe query results in multiple SQL statements, it fails to handle it, as there is no local “micro cube” as with Webi. Of course this whole argument implies that we consider a “common semantic layer” to be of high added value to an organization and therefore want it supported in its full scope. But there is even more to add: Webi not only handles multiple SQL result sets per query, it can also leverage multiple queries and easily join them. Although I’m not a friend of “merged dimensions”, there are many situations where this capability is the only workaround to get the job done at the end of the day (and not three months later when the data finally arrives in the DWH…). There is no clever way to do this in Crystal Reports or Xcelsius directly.
  • From an SAP BW perspective: Two or three years ago we often had to choose Crystal Reports because of its better connectivity to SAP BW, hierarchy handling and everything around it. These days are passé. My most recent experience with Webi using the BICS interface is very promising – totally in contrast to CR4Ent, which crashes regularly, even at the latest patch level.
  • From a usability perspective: Although SAP currently tries to position Webi as the tool where business users develop the reports, I think its usability is equally valuable for IT folks. Report development is quick and straightforward – once you’ve got used to the ribbon-style menus 😉
  • From an installation footprint perspective: Given that SAP releases new patches nearly every three or four weeks, patching client installations is a nightmare. This makes fully web-based deployment scenarios all the more valuable. Therefore, once again, Webi is the favorite.

Still, Web Intelligence has some shortcomings. That’s why you should evaluate Crystal Reports in a second step:

  • One of the major differentiators between Crystal Reports and all the other frontend tools is conditional formatting. As you may know, Crystal Reports has a powerful formula language integrated. This formula language can be used to control nearly every property you can set in Crystal Reports. This way you can implement what I call “guided interactivity” at its best: let the end user choose some parameter values and use these values to control both the data in the report and, especially, the layout. The typical use case here: a customer wants to build 10 similar reports. They are not exactly the same regarding layout, but similar. In Webi, for example, there is no straightforward way to conditionally show or hide some parts of a report. In Crystal Reports such a thing is a no-brainer.
  • Interactive / proactive Alerts: As of today, only Crystal Reports based alerts can be used to send an email notification if they are triggered.
  • Export formats: Crystal Reports has a multitude of available export formats, including Word or XML, which aren’t available in any of the other tools.
  • Hierarchical Grouping for relational data sources: Crystal Reports can dynamically resolve a Child-Id-to-Parent-Id relationship and apply calculations over such a hierarchy (the small sketch below shows what that means in practice).
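To spell out what “resolving a Child-Id-to-Parent-Id relationship” boils down to, here is a small, generic sketch with invented sample data: it rolls a measure up along the parent/child hierarchy, which is the kind of calculation meant above.

```python
# Roll a measure up along a parent/child (adjacency list) hierarchy.
rows = [  # (id, parent_id, amount)
    (1, None, 0),
    (2, 1, 100),
    (3, 1, 50),
    (4, 2, 25),
]

def rollup(rows):
    """Return the total amount per node, including all of its descendants."""
    children, amount = {}, {}
    for node, parent, value in rows:
        amount[node] = value
        children.setdefault(parent, []).append(node)

    def total(node):
        return amount[node] + sum(total(child) for child in children.get(node, []))

    return {node: total(node) for node, _, _ in rows}

print(rollup(rows))  # -> {1: 175, 2: 125, 3: 50, 4: 25}
```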

But before you choose Crystal Reports, remember that there are two versions of it: the legacy Crystal Reports 2011 and Crystal Reports for Enterprise. The first one is mature and stable, but does not contain the new features introduced only in CR4Ent. On the other hand, CR4Ent is a de facto “1.x” product regarding its code maturity. For now I simply cannot recommend using it as your major reporting tool without intensive testing of your own use cases in your environment. Then again – depending on your situation – the legacy Crystal Reports does not support UNX universes at all, nor does it support UNV universes the way you’d expect coming from Webi.

What about all the other tools? I call them “niche tools”. This is due to the fact that all of them have quite a narrow scope of application compared to the “generalists” Webi and Crystal. Let me name a few:

  • SAP Visual Intelligence: This is a great tool for ad-hoc analysis. But that’s it. No way (yet) to publish documents online (except via Explorer), schedule them or create more sophisticated standard reports.
  • Explorer: Not the most mature product, especially in the context of SAP BW and BWA as a data source… In general, Explorer is nice for “standard” visualizations. But have you ever tried to customize even basic elements of these charts? Or tried to add a simple table to an Exploration View? Or to export an Exploration View as a whole? As of today these basic things seem to be impossible…
  • Analysis, Edition for OLAP: Limited to OLAP data sources, no clever integration into scheduling, publishing etc.
  • Analysis, Edition for Microsoft Office: Only BW support…
  • Dashboards / Xcelsius: Limited capabilities in terms of the data volume that can be processed, no straightforward way to realize drill-downs, no common export formats, no full Universe support, no scheduling capabilities…
  • Design Studio: Not usable for productive environments in the current version 1.0, and even for subsequent versions I’m very sceptical… In addition, the scope of the tool is focused on BI app development, which as such is clearly a niche.

This doesn’t mean that these tools are not valuable in the context of specific requirements. But assuming that there is value in reducing the number of used and supported tools to a minimum, these tools should be chosen only after having evaluated Webi and Crystal first. In my experience, chances are quite high that your requirements can be covered by one of these two tools.

What is your experience with tool selection? Would you agree with my rule of thumb? Anything I missed? Looking forward to reading your comments!

Testing BO BI 4.x using the cloud

Update, end of December 2012: The ITX Migration and Demo Environment on Cloudshare is currently no longer available to the public. IT-Logix customers can of course still apply for a shared copy of it. The reason why I have to end the public offering is increased workload on the one hand; on the other hand we need the current environment for our customer projects. Unfortunately Cloudshare did not respond to my request to offer us a free environment solely for the purpose of sharing our migration environment.

(Update October 15th 2012 –> current machine list)

(Update October 28th 2012 –> Patch 4.5 installed)

(Update October 31st 2012 –> graphomate including a demo dashboard installed –> see Cloudsrv012 in the folder graphomate – here you’ll find the user manual too; or open Dashboards on Cloudclnt01 and drag and drop the graphomate component onto the dashboard to test it yourself!)

While I discussed general migration challenges in my previous blog, this blog addresses the fact that every new release of SAP BusinessObjects (even just a service/support package) needs intensive testing (by the way, I’m not talking about versions in ramp-up but the regularly available versions, like currently BO 4.0 SP2). SAP seems to work based on the banana principle:

The product ripens with the consumer.

I could now elaborate on how bad this is and how much better other vendors do (do they really?). But I won’t. Instead, I would like to share a way to better cope with the circumstance that you have to test, test and test again whatever you do with SAP BusinessObjects before you “go live”.

When SAP provided its HANA developer environment to partners and customers I first came to know Cloudshare. In the meantime I have become quite enthusiastic about it! It has never been easier (or cheaper) for me to create development and test environments, choosing from a multitude of machine templates and getting full admin rights on all machines afterwards. But the best thing about Cloudshare is that you can easily share a virtual server environment with others for free (at least for a first period of two weeks).

This inspired me to create what was finally named the “ITX BO 4.x Migration Assessment and Demo Environment”. This is a virtual server environment in the cloud. It allows for quick and easy «hands-on» tests of current and upcoming releases of SAP BusinessObjects BI products. You can import parts (or all) of an existing BO content from your XI 3.1 system into the XI 3.1 system in the cloud (using BIAR files). Afterwards you can test a migration to BO BI 4.0 SP4 (or you can use BO 4.0 SP4 simply for its own sake). You can get your own copy of the environment for free for two weeks. Afterwards you need a cloudshare.com subscription to keep using it.

The environment also includes an installation of the products 360View+ and 360Eyes from GB and Smith (www.gbandsmith.com). I highly recommend these two products to streamline your migration. There will be another blog post where I will go into detail on this.

The 4 Available Machines

The Migration Assessment & Demo Environment consists of four machines:

  • BO XI 3.1 SP3 (Server + Client Tools + 360View + 360Eyes)
  • BO BI 4.0 SP4 Patch 5 (Server + Client Tools + Visual Intelligence + 360View + 360Eyes)
  • BO BI 4.0 SP4 Patch 5 (Client Tools, Crystal Reports 2011, Crystal Reports for Enterprise, Dashboards etc.)
  • BO DataServices 4.1 + Information Steward

Request your Free Copy

Please contact me and I will share a copy of the current migration environment with you. You’ll find my contact information in the PDF here or use Twitter (@rbranger).
Please give me a few keywords on why you’d like to use the environment and allow up to two working days for me to grant you access to a copy of the system.

You’ll receive an invitation email directly from cloudshare.com including a link.

Register on Cloudshare.com

Afterwards you need to open a free account on cloudshare.com:

After your successful registration please log in to Cloudshare ProPlus. Your environment is already starting up… Click on «View environment» to see more details…

Wait until all machines are up and running. In the meantime, read the description and get familiar with the machine names etc.

Let’s «own» the environment. Click on the corresponding button! On the right side you now have many more options available. The cloudshare.com license is now valid for longer than only the original two days.

Testing BO BI 4.0 SP4

Let’s start with using the client tools and BI Launchpad of BI 4.0 SP4. Select «Fullscreen RDP» from the drop down menu of «CLOUDCLNT012»:

  • The password of the BOE Administrator is always IT-Logix32
  • The SP4 CMS is running on cloudsrv012 on the default port 6400

Here are some helpful links:

  • Open the BI Launchpad at http://cloudsrv012:8080/BOE/BI

Find shortcuts to the available client tools on the desktop or in the start menu.

Cloud Folders

If you need to upload files (e.g. a BIAR file with your own BO content), use «Cloud Folders» to transfer them via FTP:

On the virtual machine you’ll find a shortcut on the Desktop to access your cloud folders:

Have Fun and Happy Migration!

This is it. I hope you find this new opportunity useful. You can use the environment for free for at least 14 days; afterwards you need to purchase a subscription at cloudshare.com. By the way, it is not expensive, and I wouldn’t give mine back… Regarding BO and 360 licenses, only temporary keys are included in the environment. I recommend that you use your own keys. In case you have no keys but would like to test-drive BO or 360 products, please contact me for an extended trial period.

My own cloudshare.com environment, which is the basis for the Migration Assessment Environment, is sponsored by my employer IT-Logix. Please consider IT-Logix if you need dedicated expertise for your next BO migration project.