Speed Up Your Essbase: Optimizing Data Management

In a recent article I discussed some Essbase design ideas to get your updates to run faster. I covered a few fundamental Outline design and calculation approaches, and touched on the other data storage options you have in addition to the legacy Block storage – namely, Aggregate Storage (ASO) and Hybrid.

Here I list some more ideas that could help speed up the data loading and calculation time of your Block storage cubes. I’m categorizing these as “data management” optimizations. You can use them in combination with the Outline and Calc Script ideas from the other article to really speed things up.

 

Optimize your data file format: The fastest way to load data from each flat file or SQL extract is to load into a given data block once and never return to that block later to load more data. The way to ensure you pass through each block only once is to sort your data records so that the sparse dimension columns work like a car odometer – as you read down the file, the fastest-changing sparse dimension members are on the right, slowest-changing on the left.
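To illustrate, the odometer ordering is just a multi-key sort on the sparse columns, slowest-changing first. Here is a minimal sketch using the Unix `sort` utility on a hypothetical comma-delimited extract; the file name and the Scenario, Year, Product column layout are assumptions, so map the `-k` keys to your own sparse columns:

```shell
# Hypothetical extract layout: Scenario,Year,Product,Amount (an assumption).
printf 'Actual,FY25,P200,100\nActual,FY24,P100,50\nBudget,FY25,P100,75\n' > extract.csv

# Sort with the slowest-changing sparse column as the primary key and the
# fastest-changing as the last key, so each block is visited only once.
sort -t',' -k1,1 -k2,2 -k3,3 extract.csv > extract_sorted.csv
cat extract_sorted.csv
```

Most ETL tools (or an ORDER BY in your SQL extract) can produce the same ordering without a separate sort step.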

Remove old data: As data accumulates over time, calculations can slow down, so it’s a good idea to remove old data that isn’t needed anymore. Before removing it, you can preserve the data by copying the cube in Essbase to create an archive cube that can be accessed when needed. Good candidates for removal are prior years of actuals and old forecast and budget scenarios/versions. Even if you use “Fix” statements in your Calc Scripts to skip this data during calculations, simply having more data in the cube adds performance overhead.
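For instance, once the archive copy exists, clearing an obsolete slice might look like the Calc Script sketch below – “Actual” and “FY21” are placeholder member names, so substitute members from your own Outline:

```
/* Sketch only: clear all blocks for an obsolete year of Actuals.
   "Actual" and "FY21" are placeholder member names. */
FIX ("Actual", "FY21")
   CLEARBLOCK ALL;
ENDFIX
```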

Delete unused Outline members: Sometimes the metadata used to build the Outline contains far more members than you actually populate with data. Examples are GL accounts that were opened but never used, or products and customers that contain no data for the relevant timeframe of your application. If you have even a few unused dense members, or a lot of unused sparse members, the impact on performance can be noticeable.

“De-frag” the cube regularly: Like a computer’s hard drive, Essbase cubes become fragmented as a result of loading, calculating, deleting and saving data over and over again. There are a few defragmentation methods available. Clearing and reloading all data may be an option for smaller cubes – you can dump the data by exporting it, then reload from the export file or from the original source files. Alternatively, “Force Restructure” within the MaxL “Alter Database” command restructures the database to eliminate or reduce fragmentation without clearing and reloading.
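As a sketch, both routes are short MaxL statements – substitute your own application and database names for Sample.Basic, and note the export file name here is just an example:

```
/* Option 1 (smaller cubes): export all data, then clear and reload. */
export database Sample.Basic all data to data_file 'sample_export.txt';

/* Option 2: restructure in place to eliminate or reduce fragmentation. */
alter database Sample.Basic force restructure;
```

Either way, schedule this during a maintenance window, since both operations take the cube out of normal use while they run.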

Consider separate cubes: Because Essbase calculation time tends to grow exponentially as the number of dimensions and the amount of data increase, splitting a large cube into two or three simpler ones might get you faster total update time. The Partitioning feature, or the @XREF and @XWRITE calc functions, can link the cubes together to meet reporting needs. But a word of caution: Partitioning comes with its own complexities, and both Partitioning and @XREF (depending on how they are implemented) can slow retrievals, so weigh the pros and cons first.
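As a sketch of the @XREF approach, the Calc Script line below pulls a value from a companion cube. Here SalesCube is an assumed location alias (created separately in MaxL with “create location alias”) pointing at the source cube, and “Total Sales” is a placeholder member name:

```
/* Sketch: copy "Total Sales" from the source cube into this cube.
   SalesCube is a hypothetical location alias defined in MaxL. */
"Total Sales" = @XREF(SalesCube, "Total Sales");
```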

Of course the brief notes above only scratch the surface. For more information, or for help to deploy ideas like these to speed up your cubes, give us a call.

Unblock your data blocks!

 

Dave Stark is founder & president of Analysis Team, Inc. (www.AnalysisTeam.com) and an Oracle Essbase 11 Certified Implementation Specialist.

 

Read other articles in this series:

Part 1 :  Speed Up Your Essbase: Optimizing Solution Design

Part 3 :  Speed Up Your Essbase: Optimizing Database and Calculation Settings

 
