Professional Documents
Culture Documents
The documentation may have changed since you downloaded the PDF. You can always find the latest information on SAP Help
Portal.
Note
This PDF document contains the selected topic and its subtopics (max. 150) in the selected structure. Subtopics from other structures are not included.
The selected structure has more than 150 subtopics. This download contains only the first 150 subtopics. You can manually download the missing
subtopics.
2016 SAP SE or an SAP affiliate company. All rights reserved. No part of this publication may be reproduced or transmitted in any form or for any purpose
without the express permission of SAP SE. The information contained herein may be changed without prior notice. Some software products marketed by SAP
SE and its distributors contain proprietary software components of other software vendors. National product specifications may vary. These materials are
provided by SAP SE and its affiliated companies ("SAP Group") for informational purposes only, without representation or warranty of any kind, and SAP
Group shall not be liable for errors or omissions with respect to the materials. The only warranties for SAP Group products and services are those that are set
forth in the express warranty statements accompanying such products and services, if any. Nothing herein should be construed as constituting an additional
warranty. SAP and other SAP products and services mentioned herein as well as their respective logos are trademarks or registered trademarks of SAP SE in
Germany and other countries. Please see www.sap.com/corporate-en/legal/copyright/index.epx#trademark for additional trademark information and notices.
Table of Contents
PUBLIC
2014 SAP SE or an SAP affiliate company. All rights reserved.
Page 1 of 137
1 OLAP
1.1 Special OLAP Functions and Services
1.1.1 Aggregation
1.1.1.1 Examples in the Data Warehousing Workbench
1.1.1.1.1 Examples of Exception Aggregation: Last Value and Average
1.1.1.1.2 Examples of Exception Aggregation: Average (AV1)
1.1.1.2 Examples in the BEx Query Designer
1.1.1.2.1 Example of Exception Aggregation: Counting
1.1.1.2.2 Example of Exception Aggregation: Enhanced Counting
1.1.2 Hierarchies
1.1.2.1 Options for Hierarchical Modeling
1.1.2.2 Hierarchy Nodes
1.1.2.2.1 Link Nodes
1.1.2.3 Loading Hierarchies
1.1.2.3.1 Loading Hierarchies Using a Process Chain
1.1.2.3.2 Special Features when Loading using the PSA
1.1.2.3.3 Loading Data as Subtrees
1.1.2.4 Creating Hierarchies
1.1.2.4.1 Modeling Nodes and Leaves
1.1.2.5 Editing Hierarchies
1.1.2.5.1 Functions of Hierarchy Processing
1.1.2.5.2 Level Maintenance
1.1.2.5.3 Hierarchy Attributes
1.1.2.6 Activating Virtual Time Hierarchies
1.1.2.7 Hierarchy Properties
1.1.2.7.1 Hierarchy Version
1.1.2.7.1.1 Maintaining Hierarchy Versions
1.1.2.7.2 Time-Dependent Hierarchies
1.1.2.7.2.1 Time-Dependent Hierarchy Structures in the Query
1.1.2.7.2.2 Loading Time-Dependent Hierarchies
1.1.2.7.3 Intervals
1.1.2.7.4 Sign Reversal
1.1.2.7.4.1 Using Sign Reversal
1.1.3 Elimination of Internal Business Volume
1.1.4 Currency Translation
1.1.4.1 Scenarios for Currency Translation
1.1.4.2 Currency Translation Type
1.1.4.2.1 Defining Target Currencies Using InfoObjects
1.1.4.2.2 Creating Variables for Currency Translation Types
1.1.4.2.3 Creating Currency Translation Types
1.1.4.2.4 Transferring Exchange Rates for Currencies from SAP Systems
1.1.4.2.5 Transferring Global Table Entries for Currencies from SAP System
1.1.4.2.6 Exchange Rates for Currencies in Flat Files
1.1.4.2.6.1 Uploading Exchange Rates from Flat Files
1.1.4.3 Currency Translation During Transformation
1.1.4.4 Currency Translation in the Business Explorer
1.1.4.4.1 Setting Variable Target Currency in the Query Designer
1.1.4.4.2 Multiple Currency Translation Types in One Query
1.1.4.5 Currency and Unit Display in Business Explorer
1.1.5 Quantity Conversion
1.1.5.1 General Information About Quantity Conversion
1.1.5.2 Prerequisites for InfoObject-Specific Quantity Conversion
1.1.5.3 Quantity Conversion Types
1.1.5.3.1 Defining Target Units of Measure Using InfoObjects
1.1.5.3.2 Creating Variables for Quantity Conversion Types
1.1.5.3.3 Creating Quantity Conversion Types
1.1.5.3.4 Transferring Global Table Entries for Units of Measure from SAP
1.1.5.4 Quantity Conversion During the Transformation
1.1.5.5 Quantity Conversion in the Business Explorer
1.1.5.5.1 Setting Variable Target Units of Measure in the Query Designer
1 OLAP
Purpose
SAP NetWeaver Business Intelligence uses OLAP technology to analyze the data stored in the data warehouse. Online Analytical Processing (OLAP) characterizes business intelligence as a decision support system, since it allows decision makers to analyze multidimensionally modeled data quickly and interactively, in accordance with business management needs.
InfoProviders provide the view of the data. Because the data in InfoCubes is stored in a read-optimized form, InfoCubes and MultiProviders based on InfoCubes are the preferred InfoProviders.
Integration
The OLAP processor, a component of the BI server, lies between the user and the database: it makes the multidimensionally formatted data available both to the BI front end and, via special interfaces (Open Analysis Interfaces), to third-party front ends. The OLAP processor is therefore optimized for the analysis and reporting of very large datasets. Users can request ad hoc individual views of business-relevant data using the Business Explorer (see BI Suite: Business Explorer).
The following graphic shows the position and tasks of the OLAP processor within data processing when a multidimensional analysis is executed:
Queries are the basis of every analysis in SAP NetWeaver Business Intelligence. To formally define a multidimensional request, a query determines:
the structure, analogous to a worksheet (see Structures, Defining Restricted Key Figures, Defining Calculated Key Figures, Defining Exception Cells)
the filter that affects this structure
the navigation space (free characteristics) (see Restricting Characteristics)
The BI system has a number of analysis and navigation functions for formatting and evaluating a company's data. These allow the user to formulate individual requests on the basis of multidimensionally modeled datasets (InfoProviders) and then to view and evaluate this data from different perspectives at runtime. The overall functionality for retrieving, processing, and formatting this data is provided by the OLAP processor.
In the context of BI Integrated Planning, you can use input-ready queries for manual planning. For more information, see BI Integrated Planning and Input-Ready Queries.
Features
The following table offers an overview of the OLAP functions and services implemented in the analytic engine of SAP NetWeaver Business Intelligence.
For more information, see Special OLAP Functions and Services, Performance Optimization and the BI Suite: Business Explorer section.
OLAP functions and services: an overview
Navigation
Filtering
Aggregation
Layout
Structuring
Hierarchical assignment of characteristic values with drill down for more than one element (Universal Display Hierarchy)
Non-cumulatives
Aggregates
OLAP Cache (can be implemented in cache mode, depending on the query)
1.1 Special OLAP Functions and Services
Features
The following sections describe some of the special OLAP functions of BI in detail:
To calculate the values of key figures, the OLAP engine aggregates the data of the InfoProvider to the detail level of the query during aggregation.
More information: Aggregation
You can use hierarchies to model values of a characteristic, characteristic attributes, and dimensions of an InfoCube in structured form.
More information: Hierarchies
You can use the Elimination of Internal Business Volume function to eliminate sales volumes relating to movements between two cost centers within
the company when executing a BEx query.
More information: Elimination of Internal Business Volume
You can use currency translation to translate the key figures with currency fields that exist in the source system in different currencies into a standard
currency in the BI system (for example, the local currency or company currency).
More information: Currency Translation
Quantity conversion allows you to convert key figures with units that have different units of measure in the source system into a uniform unit of measure
in the BI system.
More information: Quantity Conversion
Local calculations enable the recalculation of single values and results in accordance with specific criteria. For example, you can create ranked lists or
you can calculate the total for a Top 10 product list locally in the executed query.
More information: Local Calculations
You can use the Constant Selection function to ensure that a defined selection in the Query Designer cannot be changed by navigation and filtering in the
executed query. This allows you to select reference values that do not change at runtime.
More information: Constant Selection
Analysis authorizations are required by all users that want to display data from authorization-relevant characteristics or navigation attributes in a query.
More information: Analysis Authorizations
The report-report interface allows you the flexibility to call a jump target (receiver) online from a BEx query (sender) within or outside of the BI system.
Queries, transactions, reports, and Web addresses can be jump targets.
More information: Report-Report Interface
With SAP NetWeaver BI you can run various analysis scenarios. You can find the relevant information in the following sections:
You create slow-moving item lists to enable the display of characteristic values for which no data is available in the InfoProvider. This allows you to
determine characteristic values with NULL values.
More information: Example: List of Slow-Moving Items
The suppression of zero rows and columns allows you to hide rows and columns that contain only zero values (0.00).
More information: Suppression of Zero Rows and Zero Columns
You can use the Constant Selection function to generate a market index for displaying the sales volume values of products in relation to a product
group, for example.
More information: Example: Market Index
Detailed example scenarios are available in the Data Warehouse for selected OLAP functions. These scenarios already contain all the objects (InfoProviders with master data and transaction data, and queries) that you need to execute the functions (for example, Elimination of Internal Business Volume) for demonstration purposes.
More information: SAP DemoContent for Features
1.1.1 Aggregation
Use
To enable the calculation of key figures, the data from the InfoProvider has to be aggregated to the detail level of the query, and formulas may also need to be calculated. The system has to aggregate over multiple characteristics. With regard to a selected characteristic, the system can aggregate each key figure using a different rule (exception aggregation).
Features
During aggregation, the OLAP Engine in BI proceeds as follows:
1. First, standard aggregation is executed. Possible aggregation types include summation (SUM), minimum (MIN) and maximum (MAX). Minimum and
maximum can, for example, be used for date key figures.
2. Aggregation over a selected characteristic (exception aggregation) occurs after standard aggregation. The available exception aggregation types include
average, counter, first value, last value, minimum, maximum, no aggregation, standard deviation, summation, and variance. Typical use cases for
exception aggregation are warehouse stock, which cannot be totaled over time, and counters that count the number of characteristic values
for a certain characteristic. See also: Examples in the Data Warehousing Workbench.
3. Lastly, aggregation using currencies and units is executed. A * is output when two numbers that are not equal to zero are aggregated with different
currencies or units. See also: Currency Translation.
Formulas are calculated once the numbers have been completely aggregated. There are three exceptions to this rule:
For the key figure, a calculation time was selected in which the calculation of the formula is to be done before aggregation. See the section Tab Page
Aggregation in Selection/Formula Properties.
When using a formula variable with replacement from an attribute value in a calculated key figure. See the section Tab Page Aggregation in
Selection/Formula Properties.
During a currency translation that was set up in the Query Designer. See Currency Translation in the Business Explorer.
The sequence in which the values are calculated affects the result of the query. See also the example for the interpretation of query results.
The aggregation types are set during the definition of the key figure. See also: Tab Page: Aggregation. This section contains a detailed description of the
aggregation types.
You can override the aggregation settings using settings in the query (in the Query Designer, BEx Web applications, and the BEx Analyzer). See also:
Calculate Results As and Calculate Single Values As (local aggregation).
You can define the aggregation behavior for formulas and calculated key figures in the Query Designer. See also: Examples in the BEx Query Designer.
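The three-stage aggregation order described above can be sketched in Python. This is an illustrative sketch only, not SAP code; the records, characteristic values, and the choice of LAST VALUE as exception aggregation are invented for the example:

```python
from collections import defaultdict

# Key-figure records of a hypothetical InfoProvider:
# (region, month, currency, stock value)
records = [
    ("NY", "2005-01", "USD", 100),
    ("NY", "2005-02", "USD", 120),
    ("CA", "2005-01", "USD", 80),
    ("CA", "2005-02", "EUR", 90),
]

# 1. Standard aggregation (here: SUM) over the characteristics that are not
#    needed in the result -- region is aggregated away, month is kept.
by_month_cur = defaultdict(int)
for region, month, cur, val in records:
    by_month_cur[(month, cur)] += val

# 2. Exception aggregation over a selected characteristic (here: last value
#    over month, as used for stock-like, non-cumulative key figures).
last_by_cur = {}
for month, cur in sorted(by_month_cur):
    last_by_cur[cur] = by_month_cur[(month, cur)]

# 3. Aggregation over currencies/units: aggregating two non-zero values
#    with different currencies outputs "*".
nonzero = {cur: v for cur, v in last_by_cur.items() if v != 0}
result = next(iter(nonzero.values())) if len(nonzero) == 1 else "*"
print(result)  # "*" -- a USD and a EUR value remain
```

Swapping steps 1 and 2 would change the result, which is why the calculation sequence matters when interpreting query results.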
Employee   Type of Appraisal       Year
4711       Potential assessment    2002
4711       Goal achievement        2002
4711       Performance appraisal   2002
4712       Potential assessment    2002
4712       Goal achievement        2002
Employee 4711 would be counted more than once because the different appraisal types are added together, which would make the result incorrect. To avoid this, the key figure No. of Performance Appraisals has an exception aggregation (for example, Average (Values Unequal to Zero)) for the characteristic Appraisal Type. The No. of Performance Appraisals is then not totaled.
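A minimal Python sketch of this counting logic (illustrative only; the records are taken from the table above):

```python
# Each appraisal record contributes key-figure value 1. Plain summation over
# Appraisal Type would count employee 4711 once per appraisal; the exception
# aggregation "average (values unequal to zero)" over Appraisal Type yields
# 1 per employee, so the total counts employees rather than appraisals.
records = [
    ("4711", "Potential assessment"),
    ("4711", "Goal achievement"),
    ("4711", "Performance appraisal"),
    ("4712", "Potential assessment"),
    ("4712", "Goal achievement"),
]

per_employee = {}
for employee, _appraisal_type in records:
    per_employee.setdefault(employee, []).append(1)

# Exception aggregation per employee, then summation over employees.
total = sum(sum(vals) / len(vals) for vals in per_employee.values())
print(total)  # 2.0 -- two employees, not five appraisals
```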
The query then appears as follows:
From Date   To Date    No. of Days   Value   Days x Value
20050618    20050620        2            3           6
20050620    20050625        5            2          10
20050625    20050629        4            2           8
20050629    20050630        1           27          27
20050630    20050708        8           44         352
20050708    20050807       30            1          30
20050807    20050807        1            0           0
Sum                        51           79         433
From Date   To Date    No. of Days   Value   Days x Value
200506      200506         30            3          90
200506      200506         30            2          60
200506      200506         30            2          60
200506      200506         30           27         810
200506      200507         61           44        2684
200507      200508         62            1          62
200508      200508         31            0           0
Sum                       274           79        3766
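The exception aggregation type AV1 (average weighted by the number of days) can be sketched as follows. The per-row values are inferred from the row products and column sums of the monthly table above, so treat the figures as illustrative:

```python
# (number of days, value) per row of the monthly table above
rows = [(30, 3), (30, 2), (30, 2), (30, 27), (61, 44), (62, 1), (31, 0)]

total_days = sum(days for days, _ in rows)
weighted_sum = sum(days * value for days, value in rows)
av1 = weighted_sum / total_days  # day-weighted average

print(total_days, weighted_sum, round(av1, 2))  # 274 3766 13.74
```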
Region   Customer   Sales Volume (USD)
NY       A             400,000
NY       B             200,000
NY       C              50,000
CA       A             800,000
CA       C             300,000
You want to use the query to determine the number of customers whose sales volume does not exceed 1,000,000 USD. To do so, you create the calculated key figure Customer sales volume <= 1,000,000 (F1) with the following properties:
General tab page: Formula definition: Sales Volume <= 1,000,000
Aggregation tab page: Exception Aggregation: Total, Ref. Characteristic: Customer
This query would deliver the following result:
Region           Customer   Sales Volume (USD)   F1
NY               A             400,000
                 B             200,000
                 C              50,000
                 Result        650,000
CA               A             800,000
                 C             300,000
                 Result      1,100,000
Overall result               1,750,000            2
The overall result of the calculated key figure F1 is calculated as follows: the sales volume of customer A (400,000 + 800,000) does not fulfill the condition (sales volume <= 1,000,000) -> 0; the sales volume of customer B (200,000) fulfills the condition -> 1; the sales volume of customer C (50,000 + 300,000) fulfills the condition -> 1. Totaled, this gives 2 as the overall result for F1.
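The calculation just described can be reproduced with a short Python sketch (illustrative only; the customer names A, B, C and the figures are taken from the example):

```python
sales = {
    ("A", "NY"): 400_000, ("A", "CA"): 800_000,
    ("B", "NY"): 200_000,
    ("C", "NY"): 50_000,  ("C", "CA"): 300_000,
}

# Exception aggregation with reference characteristic Customer: first
# aggregate the sales volume per customer ...
per_customer = {}
for (customer, _region), amount in sales.items():
    per_customer[customer] = per_customer.get(customer, 0) + amount

# ... then evaluate the formula (sales volume <= 1,000,000) per customer
# and total the 0/1 results.
f1 = sum(1 for total in per_customer.values() if total <= 1_000_000)
print(f1)  # 2 -- customer A (1,200,000) fails the condition; B and C pass
```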
A query with a drilldown by region would give the following result:
Region           Sales Volume (USD)   F1
NY                  650,000
CA                1,100,000
Overall result    1,750,000            2
Because the reference characteristic Customer is assigned to the calculated key figure F1 for the exception aggregation, the query delivers the required data even without a drilldown by the reference characteristic.
Country   Region   Customer   Sales Volume (USD)
US        NY       A             400,000
US        NY       B             200,000
US        NY       C              50,000
US        CA       A             800,000
US        CA       C             300,000
In this query, you do not want to determine the number of customers whose sales volume totals no more than 1,000,000 USD (see Example of Exception Aggregation: Counting); instead, you want to determine the following values:
Number of customers with a sales volume between 100,000 and 1,000,000
Number of customers with a sales volume between 100,000 and 1,000,000 in at least one region
To be able to calculate these values, you have to create the following calculated key figures:
F1: Customers with sales volume <= 1,000,000
General tab page: Formula definition: Sales Volume <= 1,000,000
Country   Region           Customer   Sales Volume (USD)   F1   F2   F3
US        NY               A             400,000
                           B             200,000
                           C              50,000
                           Result        650,000
          CA               A             800,000
                           C             300,000
                           Result      1,100,000
          Overall result               1,750,000
The overall result of the calculated key figure F3 is clarified by the table below:
Customer         Country   Region   Sales Volume (USD)   F1   F2   F3
A                US        NY          400,000
                           CA          800,000
                 Result              1,200,000
B                US        NY          200,000
                 Result                200,000
C                US        NY           50,000
                           CA          300,000
                 Result                350,000
Overall result                       1,750,000
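The definitions of F2 and F3 are cut off in this excerpt, so the following Python sketch only illustrates one plausible reading of the two bullets above: F2 counts customers whose total sales volume lies between 100,000 and 1,000,000, and F3 counts customers who reach that range in at least one region. Both readings are assumptions, not taken verbatim from the source:

```python
sales = {
    ("A", "NY"): 400_000, ("A", "CA"): 800_000,
    ("B", "NY"): 200_000,
    ("C", "NY"): 50_000,  ("C", "CA"): 300_000,
}

def in_range(amount):
    # Assumed reading: "between 100,000 and 1,000,000" taken as inclusive.
    return 100_000 <= amount <= 1_000_000

totals = {}
region_hit = {}
for (customer, _region), amount in sales.items():
    totals[customer] = totals.get(customer, 0) + amount
    region_hit[customer] = region_hit.get(customer, False) or in_range(amount)

f2 = sum(1 for total in totals.values() if in_range(total))  # B and C
f3 = sum(1 for hit in region_hit.values() if hit)            # A, B, and C
print(f2, f3)
```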
1.1.2 Hierarchies
Purpose
In SAP NetWeaver Business Intelligence there are different options for modeling hierarchical structures. The following table gives you an overview:
Hierarchy Type
Description
The most appropriate type of modeling depends on the individual case. The modeling types differ, for example, with regard to how data is stored in the source
system, performance, authorization checks, the use of aggregates, or whether parameters can be set for the query. You can find a comparison of the three
types of hierarchy modeling under Options for Hierarchical Modeling.
In addition, you can organize the elements of a structure hierarchically in the query definition. Characteristic hierarchies are not required for
these functions.
You can display one or both axes (rows and columns) as a hierarchy.
For more information about the Structures with Hierarchical Display and Display as Hierarchy (Universal Display Hierarchy) functions for
the BEx Analyzer, see Working with Hierarchies.
You can use the Display as Hierarchy function to display the different ways of modeling hierarchical structures (named in the table above) in
essentially the same format in the query.
The following sections deal exclusively with the maintenance of characteristic hierarchies.
Avoid grouping properties that belong to semantically different dimensions, such as Time, Product, Customer, and Country, in one
artificial characteristic.
Example: Distribution Channel and Product Group in an artificial characteristic Profit Center.
If you want to define characteristic hierarchies to use in reporting to display different views, we do not recommend that you code separate
dimensions in one artificial characteristic. This has the following drawbacks:
The hierarchies are unnecessarily large, and system performance suffers during reporting. (A hierarchy can include at most
50,000 to 100,000 leaves; see Creating Hierarchies.)
Comprehensive change runs occur too frequently.
Characteristic hierarchies also allow you to create queries for reporting in several ways. In the Query Designer you can use characteristic hierarchies in the
following ways:
As a display hierarchy for a characteristic, if this needs to be displayed as a hierarchy (see Characteristic Properties)
As a selection of specific characteristic values, if a characteristic needs to be restricted to a hierarchy or to hierarchy nodes (see Restricting
Characteristics: Hierarchies)
Integration
In InfoObject maintenance you have to specify whether a characteristic can have hierarchies (see Creating InfoObjects: Characteristic). A characteristic of
this type is called a hierarchy basic characteristic.
Hierarchies have to (and can only) be created for hierarchy basic characteristics. All characteristics that reference a hierarchy basic characteristic
automatically inherit the corresponding hierarchies. A hierarchy basic characteristic can have as many hierarchies as required.
Features
Loading hierarchies
You can load characteristic hierarchies into your BI system:
From a source system
Using the scheduler in the Data Warehousing Workbench (see Loading Hierarchies)
Using a process chain (see Loading Hierarchies Using a Process Chain)
From or into another BI system
Using the data mart interface. You cannot transport hierarchies into another BI system because hierarchies are data (see Using the Data Mart
Interface).
Creating hierarchies
You can create characteristic hierarchies in your BI system.
Note that the size of the hierarchy influences performance (see Creating Hierarchies).
Editing hierarchies
In hierarchy maintenance for the BI system, you can change, copy, delete, or set reporting-relevant hierarchy attributes in characteristic hierarchies (see
Editing Hierarchies).
Activating hierarchies
To use hierarchies in reporting, you have to activate your hierarchies (see Editing Hierarchies).
The BI system predefines all useful hierarchies for time characteristics. Activate a useful selection from the complete list of proposals (see Activating Virtual
Time Hierarchies).
Adjusting to hierarchy changes
When you create or load new hierarchies you also need to create the corresponding aggregates. When you change a hierarchy or activate a changed
hierarchy, all existing aggregates are modified automatically (see System Response Upon Changes to Master Data and Hierarchies).
The system likewise modifies BI accelerator indexes during hierarchy and attribute change runs (see System Response Upon Changes to Data: BI Accelerator Index).
Example
The following graphic shows a sales hierarchy as a typical example of a characteristic hierarchy in the BEx Analyzer.
The following graphic shows a hierarchical structure modeled in the dimensions of an InfoCube in the BEx Analyzer.
Using the Display as Hierarchy (Universal Display Hierarchy) function, you can also display the modeling as a tree in the query, like the characteristic
hierarchy shown above (see Working with Hierarchies).
Structure
Special hierarchy nodes
Root node: A node that is not assigned under any other node, in other words a node with no parent node (predecessor). A hierarchy can have more than one root node.
Leaf: A node without lower-level nodes (successors). Leaves are postable, but are not postable nodes (see Postability of Nodes below). Leaves are always characteristic values of the hierarchy basic characteristic. Value specification: the value comes from the InfoProvider.
Interval: A set of leaves specified by a lower and an upper limit.
Inner node: A node with successors, in other words any node except a leaf.
Hierarchy level: All nodes at the same depth in the hierarchy form a hierarchy level.
Subtree: A subtree comprises a node (the root node of the subtree) and its lower-level nodes (subnodes). Nodes on the border of a subtree are called border nodes. This is important for hierarchy authorizations (see Maintaining Authorizations for Hierarchies).
Postability of nodes
Postable node: A characteristic node that refers to the hierarchy basic characteristic. Its value is made up of its own posted value together with the aggregated values of its child nodes.
Non-postable node: A node that does not refer to the hierarchy basic characteristic (see Text Nodes, External Characteristic Nodes). Value specification: the value of a non-postable node is determined by aggregating the values of its child nodes.
Text node: A text node is a new, artificial term. Text nodes are special characteristic nodes for the artificial characteristic 0HIER_NODE.
Not Assigned (REST_H): The system automatically creates a root node REST_H, under which all characteristic values hang that exist in the master data but are not explicitly arranged in the hierarchy. The node Not Assigned guarantees that no data is lost when a presentation hierarchy is activated (see Characteristic Properties). In the query, this node is always collapsed first and does not react to Expand to Level; however, it can be opened explicitly.
Hierarchy balance
Unbalanced: A hierarchy whose leaves are at different depths.
Balanced: A hierarchy whose leaves all have the same depth. A balanced hierarchy in which all nodes of a level have the same semantics (for example, characteristic nodes of the same characteristic) is called a Named Level Hierarchy (see example 1 below). In level maintenance you can assign texts to the levels (see Level Maintenance). Typical examples are geographic hierarchies, for example with the levels Continent, Country, State, Region, and City, or time hierarchies (see the example below).
SAP NetWeaver Business Intelligence can process both balanced and unbalanced hierarchies without restriction.
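The balanced/unbalanced distinction can be checked mechanically. A small Python sketch (the geographic tree is a made-up example in the spirit of the hierarchies above):

```python
# Parent -> children; nodes without children are leaves.
tree = {
    "World": ["Europe", "US"],
    "Europe": ["DE", "FR"],
    "US": ["NY", "CA"],
    "DE": [], "FR": [], "NY": [], "CA": [],
}

def leaf_depths(node, depth=0):
    children = tree[node]
    if not children:
        return [depth]
    return [d for child in children for d in leaf_depths(child, depth + 1)]

def is_balanced(root):
    # Balanced: all leaves have the same depth.
    return len(set(leaf_depths(root))) == 1

print(is_balanced("World"))  # True: every leaf is at depth 2

tree["CA"].append("Los Angeles")  # hang a deeper level under one branch
tree["Los Angeles"] = []
print(is_balanced("World"))  # False: one leaf now sits at depth 3
```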
Example
Example 1
The following graphic gives an example of a hierarchy for the InfoObject Month 0CALMONTH to illustrate the relationships between hierarchy nodes and their grouping into hierarchy levels.
This time hierarchy is a typical example of a balanced hierarchy. It has several root nodes, because the nodes with the characteristic values 2002 and 2003 (external characteristic nodes of the time characteristic Year 0CALYEAR, which is included in addition) do not hang under a common parent node (such as a text node Year).
The hierarchy has three levels. Since each level corresponds to an InfoObject (0CALYEAR, 0CALQUARTER, 0CALMONTH), it is a Named Level Hierarchy.
Postable nodes are shown in green, non-postable nodes in yellow. The definitions of the nodes 1.2002/0CALQUARTER and 1.2003/0CALQUARTER are equivalent. Specifying an interval to summarize several leaves merely makes entry more convenient; it does not add a new structure component.
Example 2a
The following graphic gives an example of a hierarchy for the InfoObject Customer to illustrate the relationships between hierarchy nodes and their grouping into hierarchy levels. This customer hierarchy is a typical example of an unbalanced hierarchy and has only one root node. Postable nodes and leaves are shown in green, non-postable nodes in yellow.
Example 2b
The following graphic gives an example of a hierarchy for the InfoObject Customer to illustrate how such a hierarchy is displayed at the time of modeling in hierarchy maintenance. The different node types are displayed as follows:
Folder symbol: text nodes, in this case the root node Customer Hierarchy
Yellow InfoObject symbol: non-postable nodes with characteristic values of the additionally included InfoObject Region (external characteristic nodes) in the customer hierarchy
Green InfoObject symbol: postable nodes and leaves of the InfoObject Customer
See also:
Link Nodes
Loading Data as Subtrees
Creating Hierarchies
Modeling Nodes and Leaves
Editing Hierarchies
Hierarchy Properties
1.1.2.2.1 Link Nodes
Use
With a link node, you can hang the subtree located under the corresponding original node into a hierarchy several times.
Structure
Externally, the link node has the same name (NODENAME) as the original node; internally, it has a different ID (node ID).
The link node inherits all properties and subtrees of the original node. It has no children of its own; the children taken over from the original node are not displayed in hierarchy maintenance.
Link node and original node differ with regard to their parent nodes and neighboring nodes.
These special features are expressed by the link indicator that is set for the link node.
Integration
Loading link nodes
When it is loaded, the link node has its own NODEID and PARENTID. The CHILDID has to be blank. NODENAME, INFOOBJECT and, if necessary, the time intervals (fields of the from/to date) are identical to the values of the original node. The link indicator must be set.
For more information about these fields, see Uploading Hierarchies from Flat Files.
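The field rules above can be sketched as a small validation in Python. The record layout and field values are hypothetical; only the named fields (NODEID, PARENTID, CHILDID, NODENAME, INFOOBJECT) and the link indicator are taken from the text:

```python
def is_valid_link_node(link, original):
    """Check the link-node rules described in the text above."""
    return (
        link["LINK"] == "X"                              # link indicator set
        and link["CHILDID"] == ""                        # no children of its own
        and link["NODEID"] != original["NODEID"]         # own internal node ID
        and link["NODENAME"] == original["NODENAME"]     # same external name
        and link["INFOOBJECT"] == original["INFOOBJECT"]
    )

# Hypothetical flat-file records for an original node and its link node.
original = {"NODEID": "00000007", "PARENTID": "00000002", "CHILDID": "00000009",
            "NODENAME": "CUST_ORG_1", "INFOOBJECT": "0CUSTOMER", "LINK": ""}
link = {"NODEID": "00000015", "PARENTID": "00000004", "CHILDID": "",
        "NODENAME": "CUST_ORG_1", "INFOOBJECT": "0CUSTOMER", "LINK": "X"}

print(is_valid_link_node(link, original))  # True
```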
Example
The following graphic illustrates how a specific customer organization hangs both under the sales organization New York and under the sales organization California.
In hierarchy maintenance, only the link node is displayed, as a reference to the original. The arrow clarifies this relationship. In the query, the child nodes taken over from the original node are shown here in white.
The value is not counted twice in the overall total for Sales Organization USA.
1.1.2.3 Loading Hierarchies
You can also load hierarchies from another BI system using the data mart interface. See Hierarchies and Using the Data Mart Interface.
Prerequisites
In InfoObject maintenance, the indicator With Hierarchies is set for the hierarchy basic characteristic. This means that the characteristic can have
hierarchies.
If you load a hierarchy, you must have selected the permitted characteristics in the InfoObject maintenance. See Tab Page: Hierarchy in the InfoObject
maintenance.
To load hierarchies from external systems, you have to edit the metadata in the transfer structure maintenance. See Uploading Hierarchies from Flat Files.
Procedure
1. In the Data Warehousing Workbench under Modeling , select the InfoSource tree.
2. Select the InfoSource (with direct update) for the InfoObject, to which you want to load the hierarchy.
3. In the context menu of the hierarchy table object for the InfoObject, choose Additional Functions -> Create Transfer Rules. The Assign Source System dialog box appears.
4. Select the source system from which the hierarchy is to be loaded. The InfoSource maintenance screen appears.
If the DataSource only supports the transfer method IDoc, then only the transfer structure is displayed (tab page DataSource/Transfer Structure ).
If the DataSource also supports transfer method PSA, you can maintain the transfer rules (tab page Transfer Rules ).
If it is possible and useful, we recommend that you use the transfer method PSA and set the indicator Expand Leaf Values and Node
InfoObjects. You can then also load hierarchies with characteristics whose node names are longer than 32 characters.
5. Save your entries and go back. The InfoSource tree of the Data Warehousing Workbench is displayed.
6. Choose Create InfoPackage from the context menu (see Maintaining InfoPackages). The Create InfoPackage dialog box appears.
7. Enter the description for the InfoPackage. Select the DataSource (data element Hierarchies) that you require and confirm your entries.
8. On the Tab Page: Hierarchy Selection, select the hierarchy that you want to load into your BI system.
9. Specify whether the hierarchy is to be activated automatically after loading or marked for activation.
10. Select an update method (Full Update, Insert Subtree, Update Subtree).
When you upload hierarchies, the system carries out a consistency check, making sure that the hierarchy structure is correct. Error messages
are logged in the Monitor. You can get technical details about the error and how to correct it in the long text for the respective message.
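The structural part of such a consistency check can be sketched as follows. This is a minimal illustrative model, not SAP code: the parent-map representation and the node names are assumptions.

```python
# Illustrative sketch (not SAP code) of two structural checks a hierarchy
# consistency check performs: every referenced parent must exist, and
# following parent links upward must never loop.

def check_hierarchy(parent_of):
    """parent_of maps each node to its parent (roots map to None).
    Returns a list of error messages; an empty list means consistent."""
    errors = []
    nodes = set(parent_of)
    # Every referenced parent must itself be a node of the hierarchy.
    for node, parent in parent_of.items():
        if parent is not None and parent not in nodes:
            errors.append(f"{node}: unknown parent {parent}")
    # Walking up from any node must reach a root without revisiting a node.
    for node in nodes:
        seen, current = set(), node
        while current is not None:
            if current in seen:
                errors.append(f"cycle involving {current}")
                break
            seen.add(current)
            current = parent_of.get(current)
    return errors

# A well-formed two-level hierarchy passes the check:
ok = check_hierarchy({"ROOT": None, "A": "ROOT", "B": "ROOT"})   # []
# A self-referencing node is reported as a cycle:
bad = check_hierarchy({"ROOT": None, "A": "A"})
```

In the real system, the corresponding findings would surface as messages in the monitor rather than as a returned list.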
Result
The hierarchy structure and the node texts or intervals are loaded. The structure information and the hierarchy texts are stored in the BI system. You can edit
the hierarchy.
To be able to use the hierarchy in reporting, you have to activate it. If you have not set the indicator Activate Hierarchies After Loading or Note for Activation in the InfoPackage, you can activate the hierarchy later (see Editing Hierarchies).
If there are aggregates for a hierarchy and the hierarchy is marked for activation, it is activated after the next change run.
Prerequisites
You have already created an InfoPackage for loading your hierarchy (see Loading Hierarchies).
Procedure
You can include the loading of a hierarchy in a process chain in the following ways:
You can create the process chain from the InfoPackage maintenance by choosing Process Chain Maintenance. The system takes you step by step through the creation of the process chain.
You can call process chain maintenance directly from the SAP Easy Access menu: choose Administration → Process Chains. Follow these steps:
1. In the Data Warehousing Workbench toolbar, choose Process Chain Maintenance. The process chain maintenance screen appears.
4. Choose Process Types in the left-hand area of the screen. The system displays the available process types. In the process category Loading Process and Postprocessing, choose the application process type Execute InfoPackage.
5. Insert the Execute InfoPackage application process type into the process chain using drag and drop. The dialog box for inserting process variants appears.
6. Use the input help to select the InfoPackage that you want to include in the process chain.
7. Confirm your entries. Add the hierarchy-specific processes described below (Save Hierarchy and, if required, Change Run) to the process chain.
You can specify multiple InfoPackages in this process variant so that multiple hierarchies can be saved with one variant. However, the
sequence specified in the variant is not maintained here. If you do want to keep the sequence, for example when saving hierarchies as a
subtree, you need to insert a Save Hierarchy process after each hierarchy loading process. These Save Hierarchy processes have to be
saved serially, one after the other.
Hierarchy-specific processes

Save Hierarchy
This process always has to be included in a process chain by which a hierarchy is loaded. If the Save Hierarchy process is missing or not used, the InfoPackage has no effect; the hierarchy is not saved in the BI system.
Set the indicator Activate Hierarchies After Loading or Note for Activation if the hierarchy is to be saved and activated automatically after the load. The corresponding option in the InfoPackage is not sufficient, because it is only used if the hierarchy loading process is scheduled manually using this InfoPackage (see Loading Hierarchies).
If you do not set the activation indicator in the Save Hierarchy process, only a modified version of the hierarchy (M version) is saved in the BI system; the hierarchy is not activated directly.

Change Run
If you set the indicator Activate Hierarchies After Loading or Note for Activation in the Save Hierarchy process and have included the Change Run process in the chain, the hierarchy is activated by the change run.
If you are not using any aggregates, you can delete this process from the process chain.
Result
You have included your hierarchy loading process in a process chain.
Example
The following graphic illustrates an example of how a process chain is used to load a hierarchy.
Prerequisites
A hierarchy DataSource from an SAP source system must support loading using the PSA.
Functions
Error treatment
By storing hierarchies in PSA tables, the system allows you to make manual corrections when errors arise.
The system generates up to 5 PSA tables for a hierarchy DataSource. These tables contain the following hierarchy segments:
Hierarchy header
Texts for hierarchy header
Hierarchy node
Node texts
Intervals
You only need the interval segment when the DataSource supports intervals in hierarchies and this is indicated in InfoObject maintenance.
If you are editing the hierarchy in the PSA, you can select a hierarchy segment in the selection dialog. The system displays each segment individually in a
table.
Flexible attributes for the hierarchy header and hierarchy node
Attributes for the hierarchy header are settings that are valid for the display and processing of the entire hierarchy in the query. You can find additional
information in the table Functions for Displaying and Processing Hierarchies in a BEx Query under Functions of Hierarchy Processing.
Attributes for hierarchy nodes are hierarchy attributes that are selected for the hierarchy basic characteristic in InfoObject maintenance and which are valid
for all hierarchies for this characteristic. See Hierarchy Properties and Tab Page: Hierarchy.
Checking permitted node characteristics
When hierarchies are transferred via RFC, the system checks which characteristics may have their values identified with hierarchy nodes. The permitted InfoObjects must be selected under External Characteristics in Hierarchies.
If no InfoObject is selected, only text nodes (for the artificial characteristic) are allowed as inner nodes.
All selected InfoObjects are included together with the characteristics compounded to them in the communication structure for hierarchy nodes.
Communication structure and transfer rules for hierarchies
If a hierarchy structure is loaded using the transfer method PSA, you can then define the transfer rules. This gives you the same flexible transformation options as for transaction data, attributes, and texts, except that it is not possible to create a start routine.
In the InfoSource maintenance, there are the following views of the transfer rules:
View for the hierarchy header segment
View for the hierarchy node segment
For the hierarchy header, you can set the properties of the hierarchy header and the properties of the file structure via Hierarchy Structure.
In the following example, the transfer rules assign the field KOKRS to the InfoObject Controlling Area (0CO_AREA) and the field KOSTL to the InfoObject Cost Center (0COSTCENTER). The alpha conversion routine is executed for KOKRS. The node has the hierarchy property Reversing the Sign (0SIGNCH), which is also uploaded. Because the InfoObject 0COSTCENTER is compounded to the InfoObject 0CO_AREA, the characteristic values are stored concatenated in NODENAME.
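The effect of compounding on NODENAME can be sketched as follows. This is an illustrative model, not SAP code: the simplified ALPHA routine (left-padding numeric values with zeros) and the field lengths of 4 characters (controlling area) and 10 characters (cost center) are assumptions.

```python
# Illustrative sketch (not SAP code) of the two steps described above:
# the ALPHA conversion routine pads purely numeric values with leading
# zeros, and compounding concatenates the superior characteristic's
# value with the dependent value to form NODENAME.

def alpha(value, length):
    """Simplified ALPHA input conversion: left-pad numeric values
    with zeros; left-justify non-numeric values (assumption)."""
    value = value.strip()
    return value.rjust(length, "0") if value.isdigit() else value.ljust(length)

def nodename(co_area, costcenter):
    # Compounded value: controlling area (4 chars) + cost center (10 chars).
    return alpha(co_area, 4) + alpha(costcenter, 10)

print(nodename("1", "4711"))  # "00010000004711"
```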
See also:
Structure of a Flat Hierarchy File for Loading Using a PSA
After loading a hierarchy, you can store it as a subtree, provided that a hierarchy already exists under the specified technical name in the BI system and this
target hierarchy contains the root nodes of the subtree hierarchy that you want to load.
You use subtree hierarchies, for example, to combine hierarchies from various source systems in a single BI system.
Prerequisites
1. Every subtree hierarchy must have the same technical name as the target hierarchy. Where necessary, you have renamed the subtree hierarchy in
accordance with the load process (see Tab Page Select Hierarchy).
The loaded hierarchy is only saved as a subtree if a hierarchy for the hierarchy basic characteristic already exists in BI under the specified key (the key
consists of the technical name of the hierarchy, a to-date, and a hierarchy version). By selecting the Subtree Insert or the Subtree Update option, you
tell the BI system to include the loaded hierarchy in a target hierarchy with the same technical name.
2. If you want to include a hierarchy as a subtree in a target hierarchy, the root node of the subtree hierarchy must be included as a node in the target
hierarchy. It must also have the same technical properties as this target hierarchy node. In both the target hierarchy and subtree hierarchy this interface
node relates to the same InfoObject. It has the same technical name in the target hierarchy as it has in the subtree hierarchy and has the same to-date
when the variables are time dependent.
3. The target hierarchy must not contain any additional subtree hierarchy nodes, unless it satisfies the prerequisites of a valid double-node in the new
complete hierarchy.
Features
When you load a hierarchy and execute a subtree insert, the hierarchy is included as a subtree in an existing hierarchy, without the system deleting any
nodes from the target hierarchy.
If a subtree is inserted a second time, each subtree hierarchy node under the interface node of the target hierarchy is duplicated, causing the
loading process to terminate.
When you load a hierarchy and execute a subtree update, the hierarchy is included as a subtree in an existing hierarchy. The system replaces the old subtree
with the new one.
If a subtree update is executed again, all the nodes under the interface node in the target hierarchy are deleted before the new subtree is
inserted.
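The behavior of the two update methods can be sketched with a toy tree. This is an illustrative model, not SAP code; the node names are made up.

```python
# Illustrative sketch (not SAP code) of the two update methods. The tree
# is a dict mapping each node to its list of children.

def subtree_insert(children, interface_node, new_children):
    """Subtree insert: add the loaded nodes under the interface node
    without deleting anything. Loading the same subtree twice therefore
    duplicates the nodes (which is why the real load then terminates)."""
    children.setdefault(interface_node, []).extend(new_children)

def subtree_update(children, interface_node, new_children):
    """Subtree update: delete everything under the interface node first,
    then add the newly loaded nodes."""
    children[interface_node] = list(new_children)

tree = {"ROOT": ["EUROPE"], "EUROPE": []}
subtree_insert(tree, "EUROPE", ["DE", "FR"])
subtree_insert(tree, "EUROPE", ["DE", "FR"])   # duplicates: DE, FR, DE, FR
subtree_update(tree, "EUROPE", ["DE", "FR"])   # back to exactly DE, FR
print(tree["EUROPE"])  # ['DE', 'FR']
```

This is why the subtree update option can be repeated as often as you like, while a repeated subtree insert requires the old nodes to be deleted first.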
Activities
In the scheduler, on the Select Hierarchy tab page, carry out the following steps to store a hierarchy as a subtree in BI.
1. After loading, change the technical name of the subtree hierarchy into the technical name of the target hierarchy. Choose Rename Hierarchy After
Loading and enter the technical name.
2. Choose the Subtree Insert update method if you want to include the hierarchy as a subtree in an existing hierarchy, without the system deleting any
nodes from the target hierarchy.
or
Choose the Subtree Update update method, if you want to include the hierarchy as a subtree in an existing hierarchy and you want the system to delete
the old subtree and replace it with the new one.
Example
Example 1
This example is based on a scenario in which the hierarchies for the InfoObject 0CUST_SALES have text nodes as root nodes, represented by the InfoObject 0HIER_NODE with the characteristic value ~ROOT. Apart from that, the hierarchies consist only of postable nodes from the InfoObject 0CUST_SALES.
If the InfoObject 0SOURCESYSTEM is compounded with 0CUST_SALES, you can combine hierarchies from heterogeneous SAP systems into a single
hierarchy in BI. If a hierarchy is uploaded, the InfoObject 0SOURCESYSTEM is filled automatically with the corresponding source system ID. If identical
nodes arrive from different source systems, the compounding with the 0SOURCESYSTEM prevents the nodes being duplicated in the target hierarchy. In this
example, the interface node is the root node of the 0CUST_SALES hierarchy.
In the InfoPackage for the corresponding source system, you select the name of the hierarchy that you want to load. In this example, the same technical
name is always used in the various source systems and suggested by the system as the default technical name. On the Select Hierarchy tab page in the
scheduler, specify that you want the uploaded hierarchies to always be given the same technical name in BI, if this is not already the case. You can also set
the update method here.
If, after a full update, the subtree insert option is used to store all additional hierarchies as subtrees of one of these hierarchies, you end up with a hierarchy
with a root node, under which hang all the nodes that used to hang under the root nodes of the hierarchies specific to the source system.
If you use the subtree update option, you get a target hierarchy containing only the most recently loaded subtree. This is because all the nodes under the
interface node are deleted before the new subtree is added.
Example 2
This example assumes that there are different SET hierarchies for the InfoObject 0COSTCENTER, such as for Europe, Mexico, and USA. It is also assumed
that the nodes of one hierarchy do not appear in any other hierarchy.
In BI, in the hierarchy maintenance screen, you create a WORLD hierarchy with three interface nodes belonging to the SET hierarchies mentioned above. On the Select Hierarchy tab page in the scheduler, you specify that, after loading, the three SET hierarchies are to be stored as subtrees under the same technical name as the WORLD hierarchy.
When the SET hierarchies are loaded, you get a hierarchy, under whose root nodes the three hierarchies Europe, Mexico and USA hang.
If you use the subtree insert option in this example, the scenario must be recreated the next time you upload; otherwise, the hierarchies are duplicated.
You recreate the scenario by deleting the nodes and subtrees that hang under the interface nodes, before you reload the data.
If you use the subtree update option, you are able to reload the individual hierarchies as many times as you like, because the old subtree is deleted each
time.
Procedure
You can create hierarchies in two ways: either from the SAP Easy Access menu or from hierarchy maintenance in the Data Warehousing Workbench.
On the SAP Easy Access screen, choose Modeling → Master Data Maintenance → Hierarchies. The Initial Screen: Hierarchy Editing dialog box appears.
Alternatively, you are in the Data Warehousing Workbench in the Modeling functional area. You can access hierarchy editing in the following ways:
In the InfoObject tree, choose Create Hierarchy from the context menu of the required InfoObject.
In the Data Warehousing Workbench, call hierarchy maintenance from the toolbar by choosing Edit Hierarchies. The Initial Screen: Hierarchy Editing dialog box appears.
Depending on the hierarchy properties of the characteristic, the dialog box also contains:
Fields for entering the Validity (Valid To, Valid From) for the hierarchy property Total Hierarchy Time-Dependent
Fields for specifying the Hierarchy Version for the hierarchy property Hierarchies Version-Dependent
5. Confirm your entries. The Maintain Hierarchy screen appears. You can define the structure of the hierarchy here.
6. To create a hierarchy node, first choose an insertion mode: Insert as First Child or Insert as Next Neighbor (see Hierarchy Editing Functions).
7. Choose the type of node you want to create: Text Node, Characteristic Node, <Hierarchy Basic Characteristic Node>, or Interval (see Hierarchy Nodes). The system inserts the required node. The following symbols are used:
Nodes that cannot be posted to: Text Nodes, Foreign Characteristic Nodes
Nodes that can be posted to: <Hierarchy Basic Characteristic Nodes>, Intervals
8. You can insert additional nodes using the context menu of a node. You can also call editing functions for the selected node there. Repeat this procedure until the hierarchy structure is complete. For more information, see Modeling Nodes and Leaves.
A hierarchy can contain at most 50,000 to 100,000 leaves. If your hierarchy is larger, you should insert a level that is used as a navigation attribute or, preferably, as a separate characteristic in the dimension table.
9. You can use Level Maintenance and Hierarchy Attributes to set how the hierarchy is to be displayed and processed in reporting (see Level
Maintenance and Hierarchy Attributes).
10. Save the hierarchy.
11. Activate the hierarchy. See Editing Hierarchies.
Access Using Edit Hierarchies
1. In the Data Warehousing Workbench toolbar, choose Edit Hierarchies. The Initial Screen: Hierarchy Editing dialog box appears with a list of all the hierarchies in your BI system.
2. Restrict this list to the required hierarchy basic characteristic, or select an existing hierarchy for it. (For more information about this dialog box, see Editing Hierarchies.)
3. Choose Create Hierarchies. The Create Hierarchy dialog box appears. The InfoObject name appears by default. The subsequent steps are the same as those described above (steps 4-11).
Result
The hierarchy is activated.
For both duplicate leaves and leaves in subtrees under link nodes, the values of the duplicate leaves are only counted once internally by the system. When aggregating, the system automatically calculates what are called correction leaves for the superordinate node.
If a leaf L0 occurs three times among the descendants of a node N0, its value is added three times internally and then subtracted twice by the correction node.
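The correction-leaf arithmetic can be traced with concrete numbers (the leaf value of 100 is made up for illustration):

```python
# Illustrative numeric sketch (not SAP code) of the correction described
# above: a leaf that appears three times among the descendants of a node
# is added three times while rolling values up, and a correction leaf
# then subtracts its value twice so that it counts only once.

leaf_value = 100          # value posted to the duplicate leaf
occurrences = 3           # the leaf appears three times under the node

rolled_up = occurrences * leaf_value          # 300: added once per occurrence
correction = -(occurrences - 1) * leaf_value  # -200: correction leaf
total_for_node = rolled_up + correction       # 100: counted exactly once
print(total_for_node)  # 100
```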
Postable Nodes
If an additional leaf L1 and/or a node N1 is added under the hierarchy node N, node N becomes a postable node N'. Leaf L lies under node N', followed by L1 and node N1. Leaf L is not displayed in hierarchy maintenance, but is displayed in the query. Node N1 does not have a value: it is displayed in hierarchy maintenance but not in the query.
A cost center hierarchy is a typical case that shows how useful this display behavior can be: if the node in question is a cost center that is superior to the cost centers B1 and K1, it is important to be able to see the costs that are posted directly to that cost center and not to one of the subordinate cost centers.
Using Hierarchy Attributes, you can specify that leaves such as leaf L in our example are not displayed in the BEx query (see Hierarchy Attributes).
Procedure
You can edit hierarchies in two ways: either from the SAP Easy Access menu or from hierarchy maintenance in the Data Warehousing Workbench.
On the SAP Easy Access screen, choose Modeling → Master Data Maintenance → Hierarchies. The Initial Screen: Hierarchy Editing dialog box appears. For more information, see the section Access from Edit Hierarchies below.
Alternatively, in the Data Warehousing Workbench in the Modeling functional area, you can access hierarchy editing in the following ways:
In the InfoObject tree, select an editing function from the context menu of a hierarchy.
In the Data Warehousing Workbench, call hierarchy maintenance from the toolbar by choosing Edit Hierarchies.
The following editing functions are available:

Display / Change
If several versions of the hierarchy are available, the Selection of Hierarchy Object Version dialog box appears. Choose the version you want to use.

Copy
The Save Hierarchy As dialog box appears. Enter a hierarchy name. If the hierarchy is version- or time-dependent, enter the hierarchy version and the validity date (see Hierarchy Properties). You can enter descriptions (short, medium, and long) for the hierarchy that you want to copy.

Delete

Activate
You can choose:
whether you want to activate the hierarchy directly and thereby delete the aggregates
whether you want to preselect the hierarchy for activation
In the latter case, you need to run the change run for this hierarchy. This updates all the aggregates that contain the hierarchy and activates the hierarchy. (See System Response Upon Changes to Master Data and Hierarchies.)
Access from Edit Hierarchies
4. Select the hierarchy you want to edit and then choose an editing function from the table above: Display Hierarchy, Change Hierarchy, Copy Hierarchy, Delete Hierarchy, or Activate Hierarchy.
Features
You can select the following functions from two toolbars in hierarchy maintenance. The upper application toolbar provides functions that concern the entire hierarchy; the lower application toolbar mainly offers functions for node maintenance:
Functions for the entire hierarchy

Header Data
The Display Hierarchy Header or Change Hierarchy Header dialog box appears. It contains additional information about the technical properties of your hierarchy:
Hierarchy name
Description (short, medium, long)
Object version
Person responsible
Last changed by, change date, change time
Source system
Request number
Hierarchy ID

Display / Change
You can switch between display and change mode. This affects other functions.

Level Maintenance
You can specify free texts for hierarchy levels that are displayed in the BEx query context menu (see Level Maintenance).

Hierarchy Attributes
You can make specifications for the display and processing of hierarchies in reporting (see Hierarchy Attributes).

Insert as First Child
The source node is added directly under the target node as the first child.

Insert as Next Neighbor
The source node is added next to the target node on the same level.

These insertion modes also control the drag-and-drop behavior. If you want to move several nodes at the same time with drag and drop, choose the insertion mode you want, select the hierarchy nodes (if necessary, by holding down the Ctrl key), and drag the selected hierarchy nodes to the required place.
Displaying inactive/active versions
In display mode, you can switch between the active and modified version of a hierarchy. If there is only one version, or if you are in change mode, this function is inactive.
Jumping to other maintenance transactions

You arrive at the master data maintenance for the hierarchy basic characteristic. There you can, for example, create new data that you can subsequently add to the hierarchy.

Edit InfoObjects

Monitor
The monitor of the Data Warehousing Workbench appears, positioned directly at the last request loaded.
Detail
Create and Delete Node
Text Nodes
See Text Nodes in the Postability of Nodes table under Hierarchy Nodes.
Characteristic Nodes
See Postable Nodes in the Postability of Nodes table under Hierarchy Nodes.
Intervals
See Interval in the Special Hierarchy Nodes table under Hierarchy Nodes. For
more information, see Intervals
Delete Nodes
You can delete individual nodes or several nodes at once. To delete several nodes at the same time, select them (if necessary, by holding down the Ctrl key). The system deletes the selected nodes together with their subtrees.
Collapse Subtree
Goto Node ID
The dialog box Selection of Node with Node ID appears. Specify the node ID for
the node to which you want to go. The corresponding node is selected in the tree.
See also:
Creating Hierarchies
Modeling Nodes and Leaves
Editing Hierarchies
Prerequisites
You are in the change mode for the hierarchy (see Edit Hierarchy).
Functions
In hierarchy maintenance, you can access the screen for maintaining hierarchy levels using the Level Maintenance pushbutton.
In the left-hand screen area, Maintain Level Descriptions for Display in the Query, you see a list of all hierarchy levels and their descriptions. By double-clicking a level or clicking a description, you open a dialog box in which you can enter new descriptions for a node level. Enter descriptions that are as expressive as possible for the different hierarchy levels.
If you do not enter texts for the hierarchy levels, BEx uses the generic names Level 01, Level 02, ..., Level n.
Example
For a geographical hierarchy with three levels, change the short description of the individual levels as follows:
Level 1: In place of Level 01, enter Continent.
Level 2: In place of Level 02, enter Country.
Level 3: In place of Level 03, enter Region.
When navigating in the query, after choosing Expand Hierarchy from the BEx context menu or from the enhanced context menu in the Web, you have access
to the following options:
Expand Hierarchy
Continent
Country
Region
When loading hierarchies, the hierarchy attributes are transferred from the old active version of the hierarchy. When loading via the PSA, they
can be overridden by transfer rules.
Prerequisites
You are in the change mode for the hierarchy (see Edit Hierarchy).
Functions
You can use the pushbutton Hierarchy Attributes in hierarchy maintenance to set the following Display Parameters for the Hierarchy Display in the Query.
Do Not Display Leaves for Inner Nodes in the Query
A postable node with lower-level nodes is displayed in the query by default with a leaf with the same text inserted under it (see Modeling Nodes and Leaves).
If you set this indicator, these (additional) leaves are suppressed.
If the aggregation of values of the lower-level nodes does not return a sensible value for the postable node, you can use this option for a
technical correction. The correction value which you post to the postable node in this case is hidden.
Display Behavior for Leaves of Inner Nodes Can Be Changed
You can set whether it is to be possible to change the above display behavior at query runtime. The changeability of the display behavior has the following
possible values:
' ' : The display behavior cannot be changed in the query.
'X': The display behavior can be changed in the query.
Suppressing the Not Assigned Node
If there are characteristic values that do not appear in the hierarchy, a Not Assigned node is added to the hierarchy by default. If you set this indicator, no
Not Assigned node is created for the hierarchy (REST_H, see Hierarchy Nodes).
If the hierarchy is selected as a presentation hierarchy, the characteristic values that do not appear as leaves or postable nodes in the hierarchy are filtered out, even if data has been posted to them.
There are 1000 characteristic values for the characteristic material. A hierarchy with material groups from a total of 100 materials is defined
for it. The remaining (1000-100), that is 900 characteristic values, are positioned under node Not Assigned. If you set the indicator for
Suppressing Not Assigned Nodes, these 900 characteristic values are filtered out.
Another typical example of where this is used is with a hierarchy that only contains the cost centers of one of many controlling areas.
Set this indicator when the hierarchy only contains one (relatively small) proportion of the characteristic values. This may well improve performance.
If a characteristic has a large number of values but only a fraction of them appear in the hierarchy, the Not Assigned node has a lot of nodes and the internal
hierarchy presentation is very large. This can lead to longer runtimes and problems when saving.
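The material example above amounts to a set difference. This is an illustrative sketch, not SAP code; the material names are made up.

```python
# Illustrative sketch (not SAP code) of the Not Assigned node from the
# material example above: values that do not appear in the hierarchy are
# collected under the Not Assigned node (REST_H) unless the suppression
# indicator is set, in which case they are filtered out entirely.

all_values = {f"MAT{i:04d}" for i in range(1000)}     # 1000 materials
in_hierarchy = {f"MAT{i:04d}" for i in range(100)}    # 100 in the hierarchy

def not_assigned(all_values, in_hierarchy, suppress):
    """Return the values that would hang under the Not Assigned node."""
    return set() if suppress else all_values - in_hierarchy

print(len(not_assigned(all_values, in_hierarchy, suppress=False)))  # 900
print(len(not_assigned(all_values, in_hierarchy, suppress=True)))   # 0
```

With the indicator set, the 900 remaining values never enter the internal hierarchy presentation, which is where the performance benefit comes from.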
Root / Sum Position
Here you can determine:
whether the root, and therefore the sum item of the hierarchy, is displayed bottom right in the query, with the leaves at the top, or
whether the root, and therefore the sum item of the hierarchy, is displayed top right in the query, with the leaves at the bottom.
This setting can be overridden in the Query Designer (see Characteristic Properties): in the Characteristic Properties dialog box, under the group header Hierarchy Properties, select the option Position of Lower-Level Nodes in the User Setting column, and choose Up or Down as the value.
You can also override this setting in the BEx Analyzer as well as in the Web using the appropriate entry in the context menu.
Start Drilldown Level
Here you can determine how many hierarchy levels in the query are included in drilldown when it is first performed. If no number is entered, then three levels
are displayed.
This setting can be overridden in the Query Designer (see Characteristic Properties): in the Characteristic Properties dialog box, under the group header Hierarchy Properties, select the option Expand to Level in the User Setting column, and enter a value greater than 0.
You can also override this setting in the BEx Analyzer as well as in the Web using the appropriate entry in the context menu.
Procedure
1. In the SAP Reference IMG, choose BI Customizing Implementation Guide → Business Intelligence → Reporting-Relevant Settings → General Reporting Settings → Set F4 Help and Hierarchies for Time Characteristics / OLAP Settings.
2. Choose the Virtual Time Hierarchies tab page .
In the left-hand area, you can see the active time characteristics in the system (such as date, month, calendar month, and fiscal year/period), for which
hierarchies have been made available in characteristic maintenance.
3. Choose the required time characteristic using the button (for example, date).
In the upper screen area, you can see the hierarchies suitable for the time characteristic.
4. Select the required hierarchies by double-clicking on them or using Drag&Drop to move them into the lower-right screen area.
The hierarchy tree symbol shows that the selected hierarchies are active and listed in the lower area.
5. You can now select the start level for the activated hierarchies in the lower screen area, and enter short, medium, and long descriptions.
With the start level, you determine the level to which the hierarchy is expanded in the opening drilldown.
6. Save your entries.
Result
You have activated virtual time hierarchies and can now use them in the Query Designer for reporting.
Use
Hierarchy properties are fixed in InfoObject maintenance for a characteristic and are valid for all hierarchies that have been created for this characteristic.
Note that hierarchy attributes are fixed in hierarchy maintenance and only apply to the hierarchy that is currently opened. You can find more
information under Hierarchy Attributes.
Structure
The following hierarchy properties are available:
Version dependency
Time dependency
Intervals
Reversing the sign
Integration
If there are different hierarchy versions in the source system, you can map these into BI. You can also generate different versions for one and the same
hierarchy from the source system.
Features
In InfoObject maintenance, you specify whether the hierarchy basic characteristic has the property Version-Dependent Hierarchies (see Tab Page:
Hierarchy).
In the master data maintenance for characteristic 0HIER_VERS, you can create and change hierarchy versions (see Maintain Hierarchy Versions).
In the hierarchy maintenance you edit the hierarchy versions (see Editing Hierarchies).
In reporting you can use the different hierarchy versions, for example, for comparisons. You can display the hierarchy version data sequentially or
simultaneously in different columns. Variables can be used to set parameters for the different versions.
Example
For example, you can create a company structure by area for different hierarchy versions for the InfoObject Main Area in order to execute a plan-actual
comparison in a query.
Plan-actual comparison with hierarchy versions
Hierarchy version PLAN
Area 1
Area 2
Area 2
Area 3
Area 3
Area 4
Area 4
See also:
Hierarchy Properties
If you want to create or modify versions in a particular language, select the corresponding language key.
3. Choose Execute. The Characteristic 0HIER_VERS Maintain Master Data: List screen appears.
In the hierarchy maintenance screen, you can create various hierarchy versions under the same technical name by copying a hierarchy and saving the copy as a different hierarchy version. However, you can only specify the description of a hierarchy version in the master data maintenance screens of the 0HIER_VERS characteristic.
Result
You have modified an existing version of a hierarchy or created a new hierarchy version including a descriptive text. You can use this version when you create
a hierarchy with versions.
The texts for the hierarchy version and the description are independent of the hierarchy basic characteristic.
If, for example, the version 001 has the text Plan, each hierarchy version 001 has the text Plan, regardless of which hierarchy basic
characteristic was used to define the hierarchy.
See also:
Hierarchy Versions
Hierarchy Properties
Functions
In InfoObject maintenance, you can set whether, and in which way, a hierarchy is time-dependent. You can choose from the following options:
whether the hierarchy is not time-dependent (Hierarchy Not Time-Dependent); this is set by default
In reporting, the system returns the valid hierarchy when a query is executed using the query key date.
Within a restructuring of company areas, you can create time-dependent versions of a hierarchy for the Main Area InfoObject. This enables you to compare the restructuring over different time periods in a query.
[Graphic: time-dependent hierarchy for Main Area NORTH, validity 01/01/1999 - 05/31/1999, structured into Area 1 to Area 4; a second validity interval arranges the same areas differently.]
In reporting, you can work in the individual columns of the report structure with fixed date values. You may want to do this to compare Main
Area North in the Time-Dependent Hierarchy 05/31/2000 with Main Area North in the Time-Dependent Hierarchy 06/01/2000 ( simulation).
Time-Dependent Hierarchy Structures
You can either load time-dependent hierarchies (see Loading Time-Dependent Hierarchies) or create them in the BI system (see Creating a Hierarchy).
In hierarchy maintenance, you can determine a valid time interval for each hierarchy node ( Valid to and Valid from fields).
In reporting, a hierarchy with time-dependent hierarchy structures is created either for the current key date or for the key date defined for the query. In
addition, you can evaluate a hierarchy historically using the temporal hierarchy join.
You can assign an employee to different cost centers at different times within the context of a restructuring.
In the context menu of a hierarchy, choose Display Hierarchy to access the hierarchy display: Each node and leaf has been given a date symbol. Hierarchy
nodes that are assigned to different places in the hierarchy structure, depending on the time, are displayed more than once. By double clicking on a hierarchy
node, you can display the associated validity period for the node relation.
In the following example, you can double click on the Jones leaf to see that the worker Jones was assigned to region USA between
01/01/1999 and 05/31/1999 and Puerto Rico from 06/01/1999 to 12/31/1999.
In order to use a hierarchy with a time-dependent hierarchy structure in reporting, you require the following settings in the BEx Query Designer:
a. If you want to evaluate a hierarchy with a time-dependent hierarchy structure for a fixed key date, enter the key date in query definition.
b. If you want to evaluate a hierarchy with a time-dependent hierarchy structure historically, for a key date that is to be derived from the data, choose
the temporal hierarchy join option and specify the derivation type for the key date.
For a more detailed description of the functions and differences between the two evaluation views, see Time-Dependent Hierarchy Structures in the Query.
In the maintenance of the key date derivation type (RSTHJTMAINT), you determine the rule used to derive the key date from the data. Here you specify the time characteristic and the way in which the key date is derived.
1. First determine the time characteristic.
If you choose a Basic Time Characteristic as a time characteristic (for example, 0CALDAY, 0CALMONTH, 0FISCPER), you can use a key date
derivation type of this kind for all InfoProviders that contain exactly one time characteristic that references the selected basic time characteristic. If there
are several time characteristics in an InfoProvider that reference the basic time characteristic, you have to either determine the time characteristic more
specifically or select a particular time characteristic from a particular InfoSet ( Time Characteristic from InfoSet ).
2. Determine how you want the key date to be derived from the time characteristic.
The following derivation types are available:
First day of the period
Last day of the period
Delay by number of days (you specify this in the Delay by Days field). In this case, the key date is calculated from the first day in the period plus the
number of days specified minus 1. If the key date does not fall within the period, the last day of the period is used.
Key date derivation type (basic characteristic = 0CALMONTH, derivation type = first day of period):
For January 2005 the key date is calculated as 1/1/2005.
For February 2005 the key date is calculated as 2/1/2005.
Key date derivation type (basic characteristic = 0FISCPER, derivation type = delay by number of days, delay = 29):
For K4/01.2005 the key date is calculated as 1/29/2005.
For K4/02.2005 the key date is calculated as 2/28/2005.
For K4/03.2005 the key date is calculated as 3/29/2005.
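The derivation rules above can be sketched in a few lines. This is an illustrative model only (simplified to calendar months, not BW code):

```python
from calendar import monthrange
from datetime import date, timedelta

def derive_key_date(year, month, derivation="first_day", delay_days=None):
    """Derive the key date from a period, here simplified to a calendar
    month. Implements the three derivation types described above."""
    first = date(year, month, 1)
    last = date(year, month, monthrange(year, month)[1])
    if derivation == "first_day":
        return first
    if derivation == "last_day":
        return last
    if derivation == "delay":
        # first day of the period plus the specified number of days minus 1;
        # if that falls outside the period, the last day of the period is used
        candidate = first + timedelta(days=delay_days - 1)
        return min(candidate, last)
    raise ValueError(f"unknown derivation type: {derivation}")
```

With a delay of 29 days, this reproduces the fiscal period example: 1/29/2005, 2/28/2005 and 3/29/2005.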
Note that the way in which you define the key date derivation type affects performance. The number of data records that the OLAP processor reads corresponds to the level of detail on which the time characteristic and the leaf level lie. For this reason, choose as coarse a time characteristic as possible in order to keep the number of records read small.
A small hierarchy has 100 leaves. For a period of 12 months, the OLAP Processor reads 1200 data records at month level. At day level, it
reads 36500 data records.
See also:
Time-Dependent Hierarchies
The time-dependent hierarchy for characteristic M has the following structure:
Hierarchy for Characteristic M
Successor Node   Predecessor Node   Valid From   Valid Until
Root node        -                  1.1.         31.12.
Node1            Root node          1.1.         31.12.
Node2            Node1              1.1.         16.2.
Leaf1            Node2              1.1.         16.2.
Leaf2            Node2              1.1.         31.1.
Node4            Node1              1.1.         31.12.
Leaf2            Node4              1.2.         31.12.
Leaf3            Node4              1.1.         31.12.
Node3            Root node          1.1.         31.12.
Node2            Node3              17.2.        31.12.
Leaf1            Node2              17.2.        31.12.
Leaf4            Node2              17.2.        31.12.
Node5            Node3              1.1.         31.1.
Leaf5            Node5              1.1.         31.1.
Node6            Root node          1.2.         31.12.
Leaf5            Node6              1.2.         31.12.
InfoProviders
An InfoProvider with the characteristics Characteristic M , Day , Month and the key figure Key Figure contains the following data:
InfoProvider Data
Characteristic M   Day     Month   Key Figure
Leaf1              15.1.   Jan     10
Leaf2              15.1.   Jan     20
Leaf3              15.1.   Jan     30
Leaf4              15.1.   Jan     40
Leaf5              15.1.   Jan     50
Leaf1              15.2.   Feb     25
Leaf1              28.2.   Feb     5
Leaf2              15.2.   Feb     15
Leaf3              28.2.   Feb     5
Leaf4              28.2.   Feb     35
Leaf5              28.2.   Feb     25
Query result when the hierarchy is evaluated for a fixed key date:

                 Jan   Feb   Overall Result
Overall Result   150   110   260
* Root           110    75   185
** Node1          60    50   110
*** Node2         10    30    40
**** Leaf1        10    30    40
*** Node4         50    20    70
**** Leaf2        20    15    35
**** Leaf3        30     5    35
** Node6          50    25    75
*** Leaf5         50    25    75
* Non-assigned    40    35    75
** Leaf4          40    35    75
Query1 result: presentation hierarchy with the temporal hierarchy join:

                 Jan   Feb   Overall Result
Overall Result   150   110   260
* Root node      110   110   220
** Node1          60    45   105
*** Node2         30    25    55
**** Leaf1        10    25    35
**** Leaf2        20     -    20
*** Node4         30    20    50
**** Leaf2         -    15    15
**** Leaf3        30     5    35
** Node3          50    40    90
*** Node2          -    40    40
**** Leaf1         -     5     5
**** Leaf4         -    35    35
*** Node5         50     -    50
**** Leaf5        50     -    50
** Node6           -    25    25
*** Leaf5          -    25    25
* Non-assigned    40     -    40
** Leaf4          40     -    40
The value of leaf1 on 15.2. is assigned to node2 which has node1 as its predecessor because leaf1 belongs to this hierarchy path on the posting
day.
The value of leaf1 on 28.2. is assigned to node2 which has node3 as its predecessor because leaf1 belongs to this hierarchy path on the posting
day.
The value of leaf4 on 15.1. is assigned to the non-assigned node because leaf4 did not belong to this hierarchy on the posting day.
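The assignment logic just described can be sketched as a lookup over the validity intervals of the hierarchy table. The edge list below is a minimal, illustrative model of the edges involved (not BW code):

```python
from datetime import date

# Excerpt of the hierarchy for characteristic M:
# (successor, predecessor, valid_from, valid_to)
EDGES = [
    ("Node2", "Node1", date(1999, 1, 1), date(1999, 2, 16)),
    ("Node2", "Node3", date(1999, 2, 17), date(1999, 12, 31)),
    ("Leaf1", "Node2", date(1999, 1, 1), date(1999, 2, 16)),
    ("Leaf1", "Node2", date(1999, 2, 17), date(1999, 12, 31)),
    ("Leaf4", "Node2", date(1999, 2, 17), date(1999, 12, 31)),
]

def predecessor_on(node, posting_date):
    """Temporal hierarchy join in miniature: return the predecessor whose
    edge is valid on the posting date, or None ('not assigned')."""
    for successor, predecessor, valid_from, valid_to in EDGES:
        if successor == node and valid_from <= posting_date <= valid_to:
            return predecessor
    return None
```

Applied per record, this reproduces the assignments above: leaf1's 15.2. value rolls up via node1, its 28.2. value via node3, and leaf4's 15.1. value is not assigned.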
Query2: Restriction for key figure
A query based on the data above, with one structure element (the key figure restricted to node2: key figure(node2)) and the months in the columns, produces the following result:
Query2 Result

                    Jan   Feb   Result
Key Figure(Node2)    30    65    95
The value for January is the total of the values ( leaf1, 15.1. ) and ( leaf2, 15.1. ).
The value for February is the total of the values ( leaf1, 15.2. ), ( leaf1, 28.2. ) and ( leaf4, 28.2. ).
Query3: Presentation hierarchy with filters on nodes
If, in addition, query1 is filtered using characteristic M = node2 , the following result is produced:
Query3 Result

                 Jan   Feb   Overall Result
Overall Result    30    65    95
* Node2           30    25    55
** Leaf1          10    25    35
** Leaf2          20     -    20
* Node2            -    40    40
** Leaf1           -     5     5
** Leaf4           -    35    35
If the time interval of the new hierarchy overlaps the time interval of the existing hierarchy, the time interval of the existing hierarchy is reduced
so that the two time intervals run seamlessly into one another.
This does not apply if the time interval of the existing hierarchy contains the time interval of the new hierarchy, meaning that the from-date of
the new hierarchy is greater than the from-date of the existing hierarchy and the to-date of the new hierarchy is less than the to-date of the
existing hierarchy. In this case, the new hierarchy cannot be loaded. You have to change the time interval for either the new hierarchy or the
existing hierarchy, so that the time interval of the new hierarchy is not contained in the time interval of the existing hierarchy.
If there are existing hierarchies with time intervals that are included in the time interval of a new hierarchy, the existing hierarchies are deleted.
If you do not want to specify the validity of a hierarchy in advance, you specify the current date as the from-date and the latest date possible
as the to-date, for example, 12.31.9999. You do the same for any other hierarchies that are subsequently loaded. When you load each
hierarchy, the system automatically sets the to-date of the last hierarchy that was loaded as the day before the from-date of the new
hierarchy. This procedure applies only to hierarchies that support transfer rules, see Special Features when Loading Data using the PSA.
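The interval rules above can be sketched as follows; this is a simplified model covering only the cases described, assuming (as above) that hierarchies are loaded in chronological order:

```python
from datetime import date, timedelta

def load_hierarchy(existing, new):
    """Validity handling when loading a new time-dependent hierarchy.
    'existing' is a list of (from_date, to_date) tuples, 'new' is one tuple.
    Existing intervals inside the new one are deleted; an overlapping
    existing interval is shortened so the two run seamlessly into one
    another; a new interval inside an existing one is rejected."""
    new_from, new_to = new
    result = []
    for old_from, old_to in existing:
        if old_from < new_from and old_to > new_to:
            raise ValueError("new hierarchy interval is contained in an existing one")
        if new_from <= old_from and old_to <= new_to:
            continue  # existing interval lies inside the new one: deleted
        if old_from < new_from <= old_to:
            # overlap: set the to-date to the day before the new from-date
            old_to = new_from - timedelta(days=1)
        result.append((old_from, old_to))
    result.append(new)
    return sorted(result)
```

For example, loading a hierarchy valid from 01/01/2000 to 12/31/9999 on top of one valid from 01/01/1999 to 12/31/9999 shortens the existing interval to end on 12/31/1999.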
Whether the hierarchy is extracted as time-dependent or not depends on the properties the hierarchy has in the SAP source system and in the DataSource.
See also:
Time-Dependent Hierarchy
1.1.2.7.3 Intervals
Use
Under both postable and non-postable nodes, you can hang an interval instead of a set of leaves. An interval comprises a set of characteristic values of the hierarchy basic characteristic and is defined by its lower limit (the From-<Characteristic Value> ) and its upper limit (the To-<Characteristic Value> ). Since an interval corresponds to a set of leaves, you cannot hang additional objects under an interval. (For more information about the individual hierarchy nodes, see Hierarchy Nodes.)
Features
In the InfoObject maintenance, you define whether intervals in hierarchies are permitted for the hierarchy basic characteristic (see Tab Page: Hierarchy).
In the hierarchy maintenance, you model the hierarchy with intervals, if necessary.
If you add an interval to a hierarchy in the hierarchy maintenance, the system creates a node for the artificial characteristic 0HIER_NODE. This node represents the interval. The limits of the interval are entered in the LEAFFROM and LEAFTO fields of the /BI*/H<IOBJNM> hierarchy table, which is generated for each hierarchy basic characteristic. The description of the interval is made up of the descriptions of the characteristic values selected for these fields.
You can also create intervals for characteristic values for which no master data has been posted yet. If new data is posted for these characteristic values, the system automatically arranges it under the interval. In this way you avoid having to extend the hierarchy each time master data is added.
In reporting, an interval is not displayed as a node but is resolved: all leaves that lie within the interval and for which data is posted in the InfoProvider are displayed.
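A sketch of this resolution step (illustrative only, not BW code):

```python
def resolve_interval(low, high, posted_values):
    """At query time an interval node is resolved into the leaves that lie
    within its limits and for which data is posted in the InfoProvider."""
    return sorted(value for value in posted_values if low <= value <= high)
```

With limits 100 to 1000 and posted values 50, 100, 350, 500 and 1200, only 100, 350 and 500 appear as leaves; a newly posted 501 appears automatically.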
Example
Example 1 Cost Element Hierarchy
In a hierarchy for the Cost Element (0COSTELMNT) InfoObject, you want to add cost elements 100 to 1000 as an interval under the Material Costs node.
Until now, only the characteristic values 100 to 500 have been posted in the InfoProvider for the hierarchy basic characteristic.
1. Create the Material Costs text node.
2. You can create the interval directly under the Material Costs node. In this case, the leaves for the individual cost elements are likewise displayed
directly under the Material Costs node in the query.
However, if you also want to see a node in the query that summarizes the cost elements included in the interval, you can then create a Cost
Element 100 to 1000 text node under the node.
This way of modeling intermediate nodes is also suitable, for example, for a customer hierarchy in which you do not want to see all customers at once, but rather want to display groups, such as Customers A-C and Customers D-F .
3. Under the Material Costs (or Cost Element 100 to 1000 ) parent node, create an interval for Cost Element InfoObject.
4. Specify the interval limits in the Create Interval dialog box. Since the Controlling Area (0CO_AREA) InfoObject is compounded to the Cost Element (0COSTELMNT) InfoObject, you must also specify the required controlling area. Example:
From-ContArea 1000
From-CostElmnt 100
To-ContArea 1000
To-CostElmnt 1000
Long Description 10000000000100 10000000001000
The Node Level corresponds to the place where you add the interval.
If the interval nodes (or a part of the values) already exist in the hierarchy, you receive the warning Duplicate Nodes . Usually, you do not want values to occur multiple times in a hierarchy. However, you can decide whether the system is to transfer the duplicate nodes. (The latter is the default setting.)
In the query, the leaves with values 100 to 500 are displayed in the cost element hierarchy under the node Material Costs (or Cost Element 100 to 1000 ).
If cost elements are added to the material costs and you create master data for the values from 501 onward, the new cost elements are automatically displayed in the query as soon as the transaction data has been loaded.
The following graphic shows both types of modeling cost element hierarchies:
See also:
Hierarchy Properties
Prerequisites
To be able to use this function, the hierarchy must have flexible hierarchy structures. This is the case when the hierarchy is loaded via the PSA and, as a
result, has transfer rules. Refer to Special Features when Loading using the PSA.
Functions
In InfoObject maintenance, you define whether the reverse sign function is possible for the hierarchy basic characteristic (see Tab Page: Hierarchy). If sign reversal is activated for hierarchies, the attribute 0SIGNCH is included in the communication structure for hierarchy nodes. You can find an example under Special Features when Loading using the PSA.
In hierarchy maintenance, you can specify for each hierarchy node whether the sign for transaction data posted to this node is to be reversed.
In the query definition, create a formula variable for reversing the sign.
You can find additional information about this procedure under Using Sign Reversal.
Example
You have a hierarchy based on income and expenditure. According to accounting logic, income is displayed with a negative sign and expenses with a positive sign. Adding these values produces the profit.
Under certain circumstances, it may be advisable to bypass accounting logic and display income in the query with a positive sign. This is where sign reversal comes in. In this example, you would activate sign reversal for income only.
The following graphic illustrates such a case: sign reversal is activated for the node REV ( Revenue) and its subnodes. In the third column, the amounts appear without sign reversal, that is, with a negative sign. In the fourth column, the amounts are multiplied by a formula variable with sign reversal and are therefore displayed as positive values.
See also:
Hierarchy Properties
Choose Change Node from the context menu of a hierarchy node to set sign reversal for this node. Note that the sign reversal only applies to the node itself, not to its children or subnodes.
You are in the Query Designer. Under Key Figures , use the context menu to choose New Formula. The Formula Builder opens.
Under Formula Variables, choose New Variable from the context menu. The variables wizard appears.
Enter a name and a short text for the variable. Choose Replacement Path as the processing type .
Choose your hierarchy basic characteristic.
Choose Hierarchy Attribute as the replacement. The attribute Sign Reversal is automatically displayed.
Save your formula variable. When the formula variable is evaluated, it returns the factor -1 or 1, with which you can multiply the required key figure.
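The effect of the replacement-path variable can be sketched in one line; this is a hedged illustration, not the BEx formula syntax:

```python
def display_value(amount, sign_reversal_active):
    """The formula variable resolves to -1 for nodes with sign reversal and
    to 1 otherwise; multiplying the key figure by it flips the displayed sign."""
    return amount * (-1 if sign_reversal_active else 1)
```

Income of -500 posted to a node with sign reversal is displayed as 500.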
Sign Reversal
Hierarchy Properties
Prerequisites
You have an InfoProvider that includes two characteristics (sender and receiver) that contain the same reference (and, thus, contain the same master data)
and are on the same level within the hierarchy.
Note that you can only create aggregates for InfoCubes that contain both characteristics (sender and receiver).
Features
To eliminate internal business volume in an InfoProvider, you have to create a key figure with a reference and include it in the InfoProvider. The eliminated value is determined during the query; it is not stored in the fact table of the InfoCube (or in the DataStore object).
Creating the reference key figure
When creating a key figure, select Key Figure with Reference . In InfoObject maintenance you then have an additional tab page, Elimination . Here you enter one or more characteristic pairs for which the key figure is to be eliminated, always choosing a sending characteristic and a receiving characteristic.
A typical example for such a pair of characteristics is Sending Cost Center and Receiving Cost Center . The characteristics of such a pair must have the
same reference characteristic. You can also enter the names of the navigation attributes here.
You can display permitted characteristics for an elimination characteristic by using the input help.
If you have specified several characteristic pairs, you also have to specify one of the following using the selection buttons:
All characteristic pairs need to be eliminated: the key figure value is only eliminated if the elimination condition is fulfilled for all characteristic pairs (AND relationship).
Each individual characteristic pair needs to be eliminated: the key figure value is eliminated as soon as the elimination condition is fulfilled for one of the characteristic pairs (OR relationship).
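As a sketch, the AND/OR logic can be modeled like this; the record layout and the node-membership set are illustrative assumptions:

```python
def eliminated(record, pairs, node_members, mode="all"):
    """A key figure value is eliminated for a hierarchy node when both the
    sending and the receiving value lie below that node -- for all
    characteristic pairs (AND, mode='all') or for at least one pair (OR)."""
    conditions = [
        record[sender] in node_members and record[receiver] in node_members
        for sender, receiver in pairs
    ]
    return all(conditions) if mode == "all" else any(conditions)
```

For the profit center example below: at a node containing both Production Accessories (UK) and Internal Service (DE), the $50 is eliminated; at a node containing only one of them, it is kept.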
Example
The profit center Production Accessories (UK) has $50.00 of internal revenue from profit center Internal Service (DE). This amount needs to be eliminated.
In the following example, different options are shown for different cases, including using these example hierarchies to display a profit center hierarchy and a
country hierarchy.
Query example 1:
You create a referenced key figure, Profit Center Sales . For your characteristic pair, you choose Profit Center (0PROFIT_CTR) and Partner Profit Center
(0PART_PRCTR). In the query you use the profit center hierarchy. The (internal business volume) amount of $50, which Production Accessories received
internally, is eliminated for this profit center for the next highest level.
Query example 2:
You create a referenced key figure, Country Sales . As your characteristic pair, you choose Country (0COUNTRY) and Partner Country (0PCOUNTRY). In the query, you use the country hierarchy. The internal business volume is eliminated at the Europe level, because the amount of $50 was posted from Germany to the UK.
Currency Translation
Use
You can use currency translation to translate key figures with currency fields, which exist in the source system in different currencies, into a standard currency in the BI system (for example, the local currency or the company currency). Another application is the currency difference report, in which you compare the current exchange rates with the exchange rates that were valid on the posting date, thus determining the effect of exchange rate changes. See also Scenarios for Currency Translation.
Features
This function enables the translation of posted data records from the source currency into a target currency, or into different target currencies, if translation is
repeated. It is based on the standard SAP function for currency translation.
Currency translation is based on currency translation types. The business rules of the translation are established in the currency translation type. A combination of various parameters (source and target currency, exchange rate type, time reference for the translation) determines how the exchange rate is determined for the translation. See also Currency Translation Types.
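In miniature, a currency translation type bundles the parameters that select the exchange rate. The sketch below uses a hypothetical in-memory rate table; the names and the rate value are assumptions, not SAP APIs:

```python
# Hypothetical rate table: (exchange rate type, source, target, date) -> rate
RATES = {("M", "EUR", "USD", "20020101"): 1.00010}

def translate(amount, source, target, rate_type, ref_date):
    """A currency translation type in miniature: source and target currency,
    exchange rate type and time reference together determine the rate."""
    if source == target:
        return amount
    return amount * RATES[(rate_type, source, target, ref_date)]
```

Translating 100 EUR into USD with rate type M on 01/01/2002 applies the rate 1.00010.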
Integration
The currency translation type is stored for future use and is available for currency translations in the transformation rules for InfoCubes and in the Business
Explorer:
In the transformation rules for InfoCubes you can specify, for each key figure or data field, whether currency translation is performed during the update. In
special cases you can also run currency translation in user-defined routines in the transformation rules.
See also Currency Translation in the Update.
In the Business Explorer you can:
1. Specify a currency translation in the query definition.
2. Translate currencies at query runtime. Translation is more limited here than in the query definition.
See also Currency Translation in the Business Explorer.
Structure
The parameters that determine the exchange rate are the source and target currencies, the exchange rate type, and the time reference for the translation.
To define a translation type, you have to define the exchange rate type, the time reference, and how and when the inverse exchange rate is used.
In the Business Explorer, you can only set time variable currency translations in query definition, and not for an executed query. See also
Setting Variable Target Currency in the Query Designer.
In the Time Adjustment field (type INT4), you can specify whole numbers with a +/- sign.
In the Time Adjustment from Variable field, you can specify formula variables (1FORMULA). Because the values of these variables must be whole numbers, they are rounded to integers where necessary.
The time adjustment (regardless of whether it is fixed or variable) is always related to the InfoObject specified under Variable Time Reference .
2. In characteristic maintenance, set the currency attribute for the InfoObject which you are using to determine the target currency.
3. Define a currency translation type in which the target currency will be determined using this InfoObject.
4. In the transformation rules for your InfoCube, specify that the values for the corresponding key figures are to be translated in the transformation and enter
the previously defined translation type.
or
Use the currency translation type for currency translation in BEx.
Example
You want to load data from a CSV file and update it into an InfoCube. The InfoCube has two key figures (kyf1 and kyf2) of type Amount with different unit
InfoObjects.
CSV file extract:

Z_COUNTRY   kyf1   kyf_unit1
D           1      USD
CH          2      USD
In the update rules, specify that kyf1 be updated to the InfoCube without changes. kyf2 is filled from the source key figure kyf1 and currency translation is
performed with currency translation type WUA01. In WUA01 you have specified that the source currency be determined from the data record and that
InfoObject Z_COUNTRY be used to determine the target currency.
In InfoObject maintenance, on the Business Explorer tab page, you have specified a currency attribute for InfoObject Z_COUNTRY: 0CURRENCY, for example. See also Tab Page: Business Explorer.
For the kyf2 key figure of the InfoCube to be updated with a currency translation, Z_COUNTRY has to contain the values D and CH. Furthermore, 0CURRENCY has to contain valid currency units, and the corresponding exchange rates have to be maintained.
Result:
Characteristic bearing master data, Z_COUNTRY:

Z_COUNTRY   0CURRENCY
D           EUR
CH          CHF
...         ...
In the transformation rules, 1 USD was translated into EUR and 2 USD into CHF.
Procedure
1. Open the Query Designer.
2. Create a new (dummy) query for an InfoProvider that contains the InfoObjects Exchange Rate Type (0RTYPE), Currency Key (0CURRENCY), or Date (0DATE), depending on which variable you require. You need this query to be able to create a variable in the Query Designer. For more information, see Defining New Queries.
3. Using the symbol next to the relevant InfoObject (0RTYPE, 0CURRENCY or 0DATE), choose the New Variable entry. The variables editor appears. See also Defining Variables.
4. Enter a description for the variable.
5. If necessary, change the automatically generated suggestion for the technical name of the variable.
6. Choose the required processing type in the Processing by field. If you choose User Entry/Default Value , the system requests that you enter the currency in a dialog box in the query.
7. On the Currencies and Units tab page, select an appropriate dimension.
8. Choose OK . The variable is saved with the settings you made and the variables editor closes.
9. Leave the Query Designer.
Result
The variable can be used in the currency translation type.
Procedure
1. On the SAP Easy Access screen for SAP NetWeaver Business Intelligence, choose SAP Menu
Currency Translation Types .
2. Enter a technical name for the translation type. The name must be between three and ten characters long and begin with a letter. Choose Create. The Edit Currency Translation Type screen appears.
3. Enter a description.
Note that with InfoSets, the InfoObject can occur several times within the InfoSet and therefore may not be unique. In this case, you have to
specify the unique field alias name of the InfoObject in the InfoSet. The field alias name consists of: <InfoSet name><three
underscores><field alias>. You can display the field alias in the InfoSet Builder using Switch Technical Name On/Off .
Example: In the InfoSet ISTEST, characteristic 0CURR occurs twice. The field alias names are ISTEST___F00009 and ISTEST___F00023.
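The alias naming rule above is mechanical enough to express directly (a trivial sketch):

```python
def field_alias_name(infoset_name, field_alias):
    """Unique field name for an InfoObject that occurs more than once in an
    InfoSet: <InfoSet name> + three underscores + <field alias>."""
    return f"{infoset_name}___{field_alias}"
```

For the InfoSet ISTEST with field alias F00009, this yields ISTEST___F00009, matching the example.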
Time Reference Tab Page
9. Select a fixed or a variable time base for the currency translation.
With a fixed time reference, you can choose between:
Selection upon translation
Current date
Fixed key date
Time reference using a variable (variable for 0DATE)
Query key date . In this case, the time reference is the key date that is set in the query properties in the Query Designer.
With a variable time reference, you can choose:
Standard InfoObject (standard time characteristic: 0FISCYEAR, 0FISCPER, 0CALYEAR, 0CALQUARTER, 0CALMONTH, 0CALWEEK or
0CALDAY), which is used to determine the time of the translation
Special InfoObject. This special InfoObject has to have the same properties as the standard InfoObject or it has to reference the standard
InfoObject. The aforementioned notes on using InfoSets also apply here.
In the Time Adjustment field, you can specify whole numbers with a +/- sign.
In the Time Adjustment from Variable field, you can specify formula variables (1FORMULA).
The time adjustment (regardless of whether it is fixed or variable) is always related to the InfoObject specified under Variable Time Reference .
10. Save your entries.
Result
The currency translation type is available for translating currencies when you perform a transformation for InfoCubes and when you analyze data in the
Business Explorer.
See also:
Currency Translation Type
Procedure
1. In the Data Warehousing Workbench under Modeling, choose the source system tree.
2. In the context menu of your SAP source system, choose Transfer Exchange Rates . The Transfer Exchange Rates: Selection screen appears.
3. Choose the exchange rate type that you want to load and the date from which changes are to be transferred.
4. Under Mode, select whether you want to simulate the upload, update the exchange rates, or copy them again. With the Update exchange rates option, existing records are updated. With the Recopy exchange rates option, table TCURR is deleted before the new records are loaded.
5. Choose Execute.
See also:
Exchange Rates for Currencies from Flat Files
Procedure
1. In the Data Warehousing Workbench under Modeling, choose the source system tree.
2. In the context menu of your SAP source system, choose Transfer Global Settings . The Transfer Global Settings: Selection screen appears.
3. Under Transfer Global Table Contents, select the Currencies field.
4. Under Mode, select whether you want to simulate the upload, update the exchange rates, or copy them again. With the Update exchange rates option, existing records are updated. With the Recopy exchange rates option, table TCURR is deleted before the new records are loaded.
5. Choose Execute.
See also:
Transferring Global Settings
Prerequisites
Your flat file should have the following format (corresponds to table TCURR without the CLIENT field):

Field   Data Type   Length   Decimal Places   Meaning
KURST   CHAR                                  Exchange rate type
FCURR   CUKY                                  From currency
TCURR   CUKY                                  To currency
GDATU   CHAR                                  Date
UKURS   DEC                                   Exchange rate
FFACT   DEC                                   From factor
TFACT   DEC                                   To factor
Sample data:

FCURR   TCURR   GDATU      UKURS     FFACT   TFACT
EUR     USD     20020101   1.00010
EUR     USD     20020112   0.98300
Choose Execute.
If the CSV file contains exchange rate types that do not exist in the BI system (table TCURV), this will be noted in the log. Entries are
displayed as follows: exchange rate type &cv1 not in TCURV table. The data from the CSV file is then written to the TCURR table. The
missing entries for TCURV can be maintained manually in the IMG ( SAP Customizing Implementation Guide SAP NetWeaver SAP
Business Information Warehouse General Settings Currencies Check Exchange Rate Types ).
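A file in this format can be checked locally before the upload. The sketch below parses a TCURR-style CSV line into named fields; the sample rows and the exchange rate type M are assumptions for illustration:

```python
import csv
import io

# Field order of the flat file (table TCURR without the CLIENT field)
FIELDS = ["KURST", "FCURR", "TCURR", "GDATU", "UKURS", "FFACT", "TFACT"]

SAMPLE = "M,EUR,USD,20020101,1.00010,,\nM,EUR,USD,20020112,0.98300,,"

def parse_exchange_rates(text):
    """Parse a TCURR-style CSV flat file into one dict per exchange rate."""
    return [dict(zip(FIELDS, row)) for row in csv.reader(io.StringIO(text))]
```

Such a check makes it easy to spot, for example, exchange rate types that are not yet maintained in the system.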
With a variable currency, the currency is determined by an InfoObject, for example 0DOC_CURRCY.
For more information, see Creating InfoObjects: Key Figures.
Features
Transformations can be performed for key figures in two ways:
1. Every key figure in an InfoCube (target key figure) has a corresponding key figure in the source (source key figure). No currency translation takes place.
2. There is no corresponding source key figure in the InfoSource for the target key figure in the InfoCube.
a. You can assign a source key figure of the same type to the target key figure (sales revenue instead of sales quantity revenue, for example).
If the currencies of the two key figures are the same, no currency translation takes place.
If the currencies are different, a translation can take place either using a currency translation type or by simply assigning a currency.
The following table provides an overview of the possible combinations in which the currency is not the same in the source and target key figures:

Source key figure currency   Target key figure currency   Translation
fixed                        variable                     no CT
fixed                        fixed                        CT
variable                     fixed                        CT
variable                     variable                     CT or assignment possible
b. If there is no corresponding source key figure of the same type, you have to fill the key figure of the target using a routine.
If the currency of the target key figure is fixed, currency translation is not performed. This means that if translation is required, you have to
execute it in a routine.
If the currency of the target key figure is variable, you also have to assign a variable source currency to the routine. You can use input help to
select a currency from the variable currencies that exist for the target. You have two options:
You can select a variable currency and assign it.
You can select a currency translation type and a currency into which you wish to translate (to currency).
By default, the to currency is the target currency if it is included in the target.
Creating a routine for currency translation:
If you want to translate currencies during the transformation but currency translation is not available for one of the reasons stated above, you can create a
routine. In transformation rule definition, choose Routine with Unit . You get an additional return parameter UNIT in the routine editor and the target currency is
determined using the value of this parameter.
For more information, see Routines in Transformations.
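As a sketch of the contract such a routine fulfills, the following illustrative Python function mirrors a routine with unit: it returns both the result value and the value of the additional UNIT return parameter, from which the target currency is determined. The rate table, the rate itself, and the function name are assumptions for illustration only:

```python
# Hypothetical stand-in for a "Routine with Unit" in a transformation rule:
# the routine computes the result value and fills the UNIT return
# parameter with the target currency.
RATES = {("EUR", "USD"): 1.25}  # assumed exchange rate, not real data

def revenue_routine(source_value, source_currency):
    """Return (RESULT, UNIT): the translated value and the currency
    written to the UNIT return parameter."""
    result = source_value * RATES[(source_currency, "USD")]
    return result, "USD"
```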
Features
When you want to translate a currency from key figures in the query, you can specify the translation type in two places:
1. In the query definition for individual key figures or structure elements
2. In the executed query, using the context menu for all elements of key figure type Amount
has been configured in your query, this function is available and you can use it in your query results.
Convert to Currency : Select this option to translate results into a specific currency. In the dropdown box, select the currency into which the results are to be translated.
Use Currency Translation : After you have selected the target currency for the results, select the type of translation to be used for the conversion from this dropdown box.
Consider Translation from Query Definition : Select this checkbox to first convert into the currency defined in the query, and then into the currency as customized in these settings.
Show Original Currency : Select this option to disable currency conversions.
More information: Query Properties
In BEx Web applications, a simple currency translation is also available in the context menu. However, in the input help for the target currency, only the
currencies for which exchange rates are maintained in the system are displayed, whereas all currencies are displayed in the BEx Analyzer.
More information: Making Currency Translations
Restrictions:
The following currency translation types are not available at the runtime of the query.
Inactive translation types
Translation types for which a variable is stored (variable time reference, source currency from variable, target currency from variable, and so on)
Translation types in which an InfoObject is used for determining the source currency or target currency
Generally, currency translation is only possible when selection elements are used (formulas are applied to values that have already been translated)
When formula variables with the dimensions Amount and Price are used
When formula variables with replacement paths are used
With key figure attributes if the key figure is of type Amount
Procedure
1. You are in the Query Designer. In the properties for your key figure of type amount, choose the Translations tab page.
2. Select your translation type under Currency Translation .
3. Select the Variables Entry field and choose New Variable . The variables editor appears. See Defining Variables.
4. Enter a description for the variable.
5. If necessary, change the automatically generated suggestion for the technical name of the variable.
6. In the Processing by field, choose the processing type User Entry/Default Value . The Currency characteristic (0CURRENCY) is the default.
7. Under Details , choose between the following input options:
optional
mandatory
mandatory, initial values not permitted
This field is ready for input by default.
8. Assign a default value for the variable.
9. Choose OK . The variable is saved with the settings you have made and the variables editor closes.
Result
When you execute the query, the system requests the variable for the target currency.
Example 1:
None of the elements have a defined currency translation type; that is, all values are displayed in the currency in which they are stored in the database:
Example 2:
A currency translation type with a target currency of USD is defined for the net sales key figure:
Example 3:
Only one currency translation is defined in the structure element of the line. The translation should be performed into each country's currency.
Example 4:
First a currency translation type is defined for the structure element of the line for Canada. The translation is made into CAD. Then a currency translation type is defined for the structure element of the line for Switzerland; the translation should be made into CHF. Then a currency translation type is defined for the key figure net sales, which stipulates that the translation should be into USD (see mapping of the structures above: 1, 2, 3):
PUBLIC
2014 SAP SE or an SAP affiliate company. All rights reserved.
Page 48 of 137
Example 5:
The example is structured exactly as in 4), with the only difference being that a translation type for Germany and England is defined afterwards for the structure elements of the line. Both are translated into the country's currency (see mapping of the structures above: 1, 2, 3, 4, 5):
Example 6:
The example is structured exactly as in 5), with the only difference being that a translation type for Canada and Switzerland is defined afterwards for the structure elements of the line. Both are translated into the country's currency (see mapping of structures above: 1, 2, 3, 4, 5, 6, 7):
Features
Displaying currencies and units in the results area of a query
Values in different currencies can, in principle, be aggregated and included in calculations in the query's results area. However, values combined from different currencies are displayed differently from values that share a common currency. This means that there are two different cases for displaying values and currencies when analyzing data in Business Explorer.
If all the values that go into the cell of the results area for the query have the same currency, the value belonging to it is displayed in this currency.
In certain situations, it is impossible to clearly specify number values and texts for currencies or units. In these cases, predefined texts are displayed instead of number values and currencies or units.
If there is a division by zero in the calculation of a number value, 0/0 is displayed.
If a number value cannot be found, NOP (does not exist) is displayed.
If a number value cannot be calculated due to a numeric overflow or another undefined mathematical operation, X is displayed.
If the values that go into a cell have different currencies, the value is displayed in numerical, aggregated form with a placeholder for the currency: the symbol * is displayed instead of a currency identifier.
The following rules apply to the aggregation behavior in the OLAP processor, for example:
3 USD + 5 EUR = 8 *
3 USD + 0 EUR = 3 USD
0 USD + 0 EUR = 0 USD or 0 EUR, depending on the sequence
3 USD - 3 USD + 5 EUR = 5 EUR
3 USD + 5 EUR - 3 USD = 5 *
3 USD + 5 = 8 *, since combining an initial (empty) currency with a real currency results in an error
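The rules above amount to a left-to-right fold in which a zero amount does not impose its currency, and combining two non-zero amounts in different (or initial) currencies yields the placeholder *. A minimal sketch, not the OLAP processor's actual implementation; values are (amount, currency) pairs and "" stands for an initial currency:

```python
from functools import reduce

def add(a, b):
    """Combine two (amount, currency) pairs according to the rules above."""
    amount = a[0] + b[0]
    if a[1] == b[1]:
        return (amount, a[1])
    if b[0] == 0:           # a zero amount does not impose its currency
        return (amount, a[1])
    if a[0] == 0:
        return (amount, b[1])
    return (amount, "*")    # mixed (or initial) currencies: placeholder *

def aggregate(values):
    """Fold a sequence of (amount, currency) pairs left to right."""
    return reduce(add, values)
```

Note that the fold is order-dependent, which reproduces the difference between 3 USD - 3 USD + 5 EUR (= 5 EUR) and 3 USD + 5 EUR - 3 USD (= 5 *).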
To drilldown further by currency, you must define the characteristic Currency/Unit as a free characteristic in the query definition. The amounts you
specified for the different currencies can now be displayed in the query cells by drilling down on Currency/Unit (1CUDIM).
As an alternative, you can also display the various currencies in the Query Monitor ( SAP Easy Access Business Explorer BEx Monitor Query Monitor ). Execute the query and choose Key Figure Definition from the results list. Currencies and units are always broken down for display, never aggregated.
Country | Actual Revenue | Currency Key | Plan Revenue | Currency Key
USA | 734 | USD | 700 | USD
Germany | 82 | EUR | 50 | USD
Switzerland | 70 | * | 50 | USD
Result | 886 | * | 800 | USD
Drilldown by currency/unit

Country | Currency/Unit | Actual Revenue | Currency Key | Plan Revenue | Currency Key
USA | USD US Dollar | 734 | USD | 700 | USD
Germany | EUR Euro | 82 | EUR | 50 | USD
Switzerland | EUR Euro | 40 | EUR | |
 | CHF Swiss Francs | 30 | CHF | |
 | USD US Dollar | | | 50 | USD
Result | | 886 | * | 800 | USD
If a user is not authorized to display a certain number value of a cell in the active query, a predefined text can be displayed in the cell instead.
If a calculated number value is made up of different currencies or units, the user can choose whether or not to display the number value. Choose Mixed Values to display the number value. If the Mixed Values setting is not active, the text that you entered under Mixed Currencies is displayed.
You can change these predefined texts in Customizing. To do this, choose SAP Customizing Implementation Guide SAP NetWeaver SAP
Business Information Warehouse Reporting-Relevant Settings General Reporting Settings in Business Explorer Display of Numerical
Values in Business Explorer .
Displaying the currency key
Under the following conditions, the currency key can be displayed in the column header instead of before or after the currency value:
The currency for a query was translated into a common target currency.
The data source only provides one currency so that a translation into a common target currency is not necessary.
To display the currency key in the column header, select Properties on the column in question using the context menu. Under Display , select Scaling
Factors for Displaying Key Figures and choose OK. The currency key is taken from the column fields and displayed in the column header.
Also note the Priority Rule for Formatting Settings for this kind of formatting setting.
Setting up the currency display
In Customizing, you can make settings for how you want to display the currencies. You can decide which key you want each currency to be displayed with,
and whether the key comes before or after the value.
Currency | Alt. Text | Display
EUR | EUR | After value
USD | | Before value
GBP | | Before value
The values above are displayed for each default setting as shown in the example. You can change these settings in Customizing if required.
To make the settings in Customizing, from the SAP Customizing Implementation Guide , choose SAP NetWeaver SAP Business Information
Warehouse Reporting-relevant Settings Set Alternative Display for Currencies .
Features
This function enables the conversion of updated data records from the source unit of measure into a target unit of measure, or into different target units of
measure, if the conversion is repeated. In terms of functionality, quantity conversion is structured similarly to currency translation.
In part, it is based on the quantity conversion functionality of SAP NetWeaver Application Server. Simple conversions can be performed between units of measure that belong to the same dimension (such as meters to kilometers, or kilograms to grams). You can also perform InfoObject-specific conversions (for example, two pallets (PAL) of material 4711 were ordered, and this order quantity has to be converted to the stock quantity carton (CAR)).
Quantity conversion is based on quantity conversion types. The business transaction rules of the conversion are established in the quantity conversion type.
The conversion type is a combination of different parameters (conversion factors, source and target units of measure) that determine how the conversion is
performed. For more information, see Quantity Conversion Types.
Integration
The quantity conversion type is stored for future use and is available for quantity conversions in the transformation rules for InfoCubes and in the Business
Explorer:
In the transformation rules for InfoCubes you can specify, for each key figure or data field, whether quantity conversion is performed during the update. In
certain cases you can also run quantity conversion in user-defined routines in the transformation rules.
For more information, see Quantity Conversion in Transformations.
In the Business Explorer you can:
Establish a quantity conversion in the query definition.
Translate quantities at query runtime. Translation is more limited here than in the query definition.
Example
Number
Unit
Number
Unit
Chocolate bar
Small carton
25
12
Chocolate bar
Large carton
20
Small carton
Europallet
40
Large carton
Prerequisites
You have made the following settings in characteristic maintenance on the Business Explorer tab page.
Specify a Base Unit of Measure
The conversion of units always takes place based on the base unit of measure (as with the conversion in materials management, table MARM, in SAP R/3).
Create a Unit of Measure for the Characteristic
When creating a unit of measure for the characteristic, the system creates a DataStore object for units of measure.
You can specify the name of the quantity DataStore object, the description, and the InfoArea into which you want to add the object. The system proposes the
name: UOM<Name of InfoObject to which the quantity DataStore Object is being added>.
With objects of this type, the system generates an SID column in the database table for each characteristic and stores the characteristic attributes in the form
of SIDs.
The SID columns are automatically filled in the transformation and may not be changed in the end routine.
Assignments of quantity DataStore objects to characteristics are 1:1. This means that only one characteristic can be assigned to a quantity DataStore object
and one quantity DataStore object can be assigned to a characteristic.
You cannot enhance or change a quantity DataStore object in DataStore object maintenance because the object is generated by the system.
You can only display it.
You can fill the quantity DataStore object with data only by using a data transfer process with transformation; update rules are not supported in
this case.
If a characteristic that has a quantity DataStore object assigned to it is changed at a later time or date, (for example, changes to compounding or to the base
unit of measure), you have to delete the quantity DataStore object and regenerate it. In practice, this does not occur after the data model has been finalized.
Structure of quantity DataStore objects:
The key fields are the characteristic (with compounding, where applicable) and the target unit of measure; the data fields are the base unit of measure and the conversion factors (counter and denominator), together with SID columns for all characteristics.
The parameters that determine the conversion factors are the source and target unit of measure and the option you choose for determining the conversion
factors.
The decisive factor in defining a conversion type is the way in which you want conversion factors to be determined. Entering source and target quantities is
optional.
Conversion Factors
The following options are available:
Using a reference InfoObject
The system tries to determine the conversion factors from the reference InfoObject you have chosen or from the associated quantity DataStore object.
If you want to convert 1000 grams into kilograms but the conversion factors are not defined in the quantity DataStore object, the system cannot perform
the conversion, even though this is a very simple conversion.
Using central units of measure (T006)
Conversion can only take place if the source unit of measure and target unit of measure belong to the same dimension (for example, meters to kilometers,
kilograms to grams, and so on).
Using reference InfoObject if available, central units of measure (T006) if not
The system tries to determine the conversion factors using the quantity DataStore object you have defined. If the system finds conversion factors, it uses
these to perform the calculation. If the system cannot determine conversion factors from the quantity DataStore object it tries again using the central units
of measure.
Using central units of measure (T006) if available, reference InfoObject if not
The system tries to find the conversion factors in the central units of measure table. If the system finds conversion factors it uses these to perform the
conversion. If the system cannot determine conversion factors from the central units of measure it tries to find conversion factors that match the
attributes of the data record by looking in the quantity DataStore object.
The settings that you can make in this regard affect performance; the decision should be based strictly on your data set.
If you only want to perform conversions within the same dimension, option 2 is most suitable.
If you are performing InfoObject-specific conversions (for example, material-specific conversions) between units that do not belong to the same dimension,
option 1 is most suitable.
In both cases, the system only accesses one database table. That table contains the conversion factors.
With option 3 and option 4, the system tries to determine conversion factors at each stage. If conversion factors are not found in the first table, the system searches the second table (the quantity DataStore object or the central table T006, depending on the option).
The option you choose should depend on how your data is distributed. If the source unit of measure and target unit of measure belong to the same dimension for 80% of the data records that you want to convert, first try to determine factors using the central units of measure (option 4), and accept that the system has to search the second table for the remaining 20%.
The Conversion Factor from InfoObject option (as with Exchange Rate from InfoObject in currency translation types) is only available when you load data. The key figure you enter here has to exist in the InfoProvider; the value this key figure has in the data record is taken as the conversion factor.
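The four factor-determination options can be sketched as a lookup strategy over the two tables. The dictionaries standing in for the quantity DataStore object and the central table T006, and the sample factors, are illustrative:

```python
def find_factor(option, material, src, tgt, dso, t006):
    """Determine a conversion factor according to options 1-4 above.
    dso: InfoObject-specific factors keyed by (material, src, tgt).
    t006: central units of measure keyed by (src, tgt).
    Returns None when no factor can be determined."""
    in_dso = dso.get((material, src, tgt))
    in_t006 = t006.get((src, tgt))
    if option == 1:   # reference InfoObject only
        return in_dso
    if option == 2:   # central units of measure only
        return in_t006
    if option == 3:   # reference InfoObject first, then T006
        return in_dso if in_dso is not None else in_t006
    if option == 4:   # T006 first, then reference InfoObject
        return in_t006 if in_t006 is not None else in_dso

# Sample data (assumed values for illustration):
dso = {("4711", "PAL", "UNIT"): 9600}
t006 = {("KG", "G"): 1000}
```

With option 1, even the simple KG-to-G conversion fails because it is not in the DataStore object; with option 3 or 4 the fallback finds it.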
Source Unit of Measure
The source unit of measure is the unit of measure that you want to convert. The source unit of measure is determined dynamically from the data record or
from a specified InfoObject (characteristic). In addition, you can specify a fixed source unit of measure or determine the source unit of measure using a
variable.
When converting quantities in the Business Explorer, the source unit of measure is always determined from the data record.
During the data load process the source unit of measure can be determined either from the data record or using a specified characteristic that bears master
data.
You can use a fixed source unit of measure in planning functions. Data records are converted that have the same unit key as the source unit of measure.
The values in input help correspond to the values in table T006 (units of measure).
You reach the maintenance for the unit of measure in SAP Customizing Implementation Guide SAP NetWeaver General Settings Check Units
of Measure .
In reporting, you can use a source unit of measure from a variable. The variables that have been defined for InfoObject 0UNIT are used.
Target Unit of Measure
You have the following options for determining the target unit of measure:
You can enter a fixed target unit of measure in the quantity conversion type (for example, UNIT).
You can specify an InfoObject in the quantity conversion type that is used to determine the target unit of measure during the conversion. This is not the
same as defining currency attributes where you determine a currency attribute on the Business Explorer tab page in characteristic maintenance. With
quantity conversion types you determine the InfoObject in the quantity conversion type itself. Under InfoObject for Determining Unit of Measure , all
InfoObjects are listed that have at least one attribute of type Unit . You have to select one of these attributes as the corresponding quantity attribute.
Alternatively, you can determine that the target unit of measure be determined during the conversion. In the Query Designer under the properties for the
relevant key figure, you specify either a fixed target unit of measure or a variable to determine the target unit of measure.
Target quantity using InfoSet
This setting covers the same functionality as InfoObject for Determining Target Quantity . If the InfoObject that you want to use to determine the target
quantity is unique in the InfoSet (it only occurs once in the whole InfoSet), you can enter the InfoObject under InfoObject for Determining Target
Quantity .
You only have to enter the InfoObject in Target Quantity Using InfoSet if you want to determine the target quantity using an InfoObject that occurs more than once in the InfoSet.
The InfoSet contains InfoProviders A and B and both A and B contain InfoObject X with a quantity attribute. In this case you have to specify
exactly whether you want to use X from A or X from B to determine the target quantity. Field aliases are used in an InfoSet to ensure
uniqueness.
All the active InfoSets in the system can be displayed using input help. As long as you have selected an InfoSet, you can select an
InfoObject. All the InfoObjects with quantity attributes contained in the InfoSet can be displayed using input help.
Example
You want to load data from a CSV file and update it into an InfoCube. The InfoCube has two key figures (kyf1 and kyf2) of type Unit with different unit
InfoObjects.
CSV file extract:

Z_COUNTRY | kyf1 | kyf_unit1
D | 1 | PAL
CH | 2 | BX
In the transformation rules, specify that kyf1 be updated to the InfoCube without changes. kyf2 is filled from the source key figure kyf1 and the unit of measure
conversion is performed using quantity conversion type WUA01. In WUA01 you have specified that the source unit of measure be determined from the data
record and that InfoObject Z_COUNTRY be used to determine the target unit of measure.
You have already chosen the associated quantity attribute 0PO_UNIT for InfoObject Z_COUNTRY.
In order for key figure kyf2 of the InfoCube to be updated with the quantity conversion, Z_COUNTRY must contain the values D and CH. Furthermore, 0PO_UNIT has to contain valid units of measure, and corresponding conversion rates have to have been maintained.
Result:

Characteristic bearing master data, Z_COUNTRY:

Z_COUNTRY | 0PO_UNIT
D | CAR
CH | UNIT
In the transformation rules, 1 PAL was converted into CAR and 2 BX into UNIT.
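A sketch of what the conversion type does in this example: the source unit comes from the data record, and the target unit comes from the 0PO_UNIT attribute of Z_COUNTRY. The conversion factors below are assumptions chosen only to illustrate the mechanism; the 0PO_UNIT values mirror the result table above:

```python
# Z_COUNTRY -> 0PO_UNIT attribute (from the example's master data)
po_unit = {"D": "CAR", "CH": "UNIT"}

# (source unit, target unit) -> factor; assumed values for illustration
factors = {("PAL", "CAR"): 40, ("BX", "UNIT"): 24}

def convert_kyf2(country, kyf1, kyf_unit1):
    """Fill target key figure kyf2: target unit via the 0PO_UNIT
    attribute of Z_COUNTRY, source unit from the data record."""
    target_unit = po_unit[country]
    return kyf1 * factors[(kyf_unit1, target_unit)], target_unit
```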
Procedure
1. Open the Query Designer.
2. Create a new (dummy) query for an InfoProvider that contains the Unit InfoObject (0UNIT). You need this query to be able to create a variable in the Query Designer. For more information, see Defining New Queries.
3. Using the symbol next to the InfoObject 0UNIT, choose the entry New Variable . The variables editor appears. See also Defining Variables.
4. Enter a description for the variable.
5. If necessary, change the automatically generated suggestion for the technical name of the variable.
6. Choose the required processing type in the Processing by field . If you choose User Entry/Default Value , the system requests that you enter the unit
of measure in a dialog box in the query.
7. On the Currencies and Units tab page, select Quantity as the dimension.
8. Choose OK . The variable is saved with the settings you made and the variables editor closes.
9. Leave the Query Designer.
Result
The variable can be used in the quantity conversion type.
Procedure
1. On the SAP Easy Access screen for SAP NetWeaver Business Intelligence, choose SAP Menu Conversion Types .
2. Enter a technical name for the conversion type. The name must be between three and ten characters long and begin with a letter. Choose Create . The Edit Quantity Conversion Type screen appears.
3. Enter a description.
Note that with InfoSets, the InfoObject can occur several times within the InfoSet and therefore may not be unique. In this case you have to
specify the unique field alias name of the InfoObject in the InfoSet. The field alias name consists of: <InfoSet name><three
underscores><field alias>. You can display the field alias in the InfoSet Builder using Switch Technical Name On/Off .
8. Save your entries.
Result
The quantity conversion type is available for converting quantities in the update of InfoCubes and with data analysis in the Business Explorer.
following tables:
T006
T006A
T006B
T006C
T006D
T006I
T006J
T006T
Procedure
1. In the Data Warehousing Workbench under Modeling , choose the source system tree .
2. In the context menu of your
SAP Source System , choose Transfer Global Settings . The Transfer Global Settings: Selection screen appears.
3. Under Transfer Global Table Contents, select the Units of Measure field.
4. Under Mode you can select whether you want to simulate upload or update or copy the tables again. With the Update Tables option, existing records
are updated. With the Rebuild Tables option, the corresponding tables are deleted before the new records are loaded.
5. Choose
Execute .
Features
Transformations can be performed for key figures in two ways:
1. Every key figure in an InfoCube (target key figure) has a corresponding key figure in the source (source key figure). Quantity conversion is not performed.
2. There is no corresponding source key figure in the InfoSource for the target key figure in the InfoCube.
a. You can assign a source key figure of the same type to the target key figure.
If the units of measure of both key figures are the same, no quantity conversion can take place.
If the units of measure are different, a conversion can take place either using a quantity conversion type or by simply assigning a unit of
measure.
b. If there is no corresponding source key figure of the same type, you have to fill the key figure of the target using a routine.
If the unit of measure of the target key figure is fixed, quantity conversion is not performed. This means that if conversion is required, you have
to execute it in a routine.
If the unit of measure of the target key figure is variable, you also have to assign a variable source unit of measure to the routine. You can use
input help to select a unit of measure from the variable units of measure that exist for the target. You have two options:
You select a variable unit of measure and assign it.
You select a quantity conversion type and a unit of measure into which you wish to convert.
Conversion Using a Quantity Conversion Type
If you have chosen an InfoObject for determining the target unit of measure in the quantity conversion type, you must heed the following when maintaining the
transformation rules:
The InfoObject for determining the target unit of measure must be contained in both the source and target systems and must be filled using a rule.
For more information, see Defining Target Units of Measure Using InfoObjects.
Routines for Quantity Conversions
If you want to convert units of measure during the transformation but quantity conversion is not available for one of the reasons stated above, you can create a routine. In transformation rule definition, choose Routine with Unit . In the routine editor you get an additional return parameter UNIT, and the target unit of measure is determined using the value of this parameter.
Features
If you want to convert the units of measure of the key figures in the query, you can determine the conversion type for the individual key figures or structure
elements in query definition. You can determine how the individual elements (key figures of type: amount) of the query are to be converted. You can specify
one quantity conversion type for each element.
Depending on how the target quantity determination is defined in the quantity conversion type, you can either specify a fixed target quantity or a variable that
the system uses to determine the target quantity.
See the unit conversion section in Properties of the Selection/Formula.
If a variable is used, the system will request the variable when you execute the query. For information on the procedure, see Setting Variable Target Units of
Measure in the Query Designer.
Procedure
1. You are in the Query Designer. In the properties for your amount key figure, choose the Translations tab page.
2. Under Unit Conversion , select your conversion type.
3. Select the Variables Entry field and choose New Variable . The variables editor appears. See also Defining Variables.
4. Enter a description for the variable.
5. If necessary, change the automatically generated suggestion for the technical name of the variable.
6. In the Processing by field, choose the processing type User Entry/Default Value . The unit of measure characteristic (0UNIT) is the default.
7. Under Details , specify whether the entry is to be:
Optional
Mandatory
Mandatory, initial values not permitted
8. Allocate a default value for the variable.
9. Choose OK . The variable is saved with the settings you made and the variable editor closes.
Result
When you execute the query the system requests the variable for the target unit of measure.
#Cmat07 | 0base_uom (Base Unit of Measure) | 0apo_storgu (Stock Level)
4711 | UNIT | PAL
4712 | KG | PAL
4713 | UNIT | PAL
The unit 0base_uom is entered as the base unit of measure in master data maintenance on the Business Explorer tab page.
The DataStore Object UOM07 is also created and activated in master data maintenance ( tab page: Business Explorer Units of Measure for Char. ).
The structure of a DataStore Object always complies with the following rule:
<characteristic> <compounding for characteristic, where applicable>
<target unit of measure> <base unit of measure> <conversion factor: counter>
<conversion factor: denominator> <SID columns for all characteristics>
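The structure rule above can be sketched as a lookup keyed by characteristic value and target unit of measure, storing the base unit and the counter/denominator conversion factors. The sample factors follow the UOM07 example data; treating the denominator as 1 is an assumption for illustration:

```python
# (characteristic value, target unit) -> (base unit, counter, denominator)
uom07 = {
    ("4711", "CAR"): ("UNIT", 240, 1),   # 1 CAR = 240 UNIT
    ("4711", "BX"):  ("UNIT", 24, 1),    # 1 BX  = 24 UNIT
    ("4711", "PAL"): ("UNIT", 9600, 1),  # 1 PAL = 9600 UNIT
}

def to_base_unit(material, quantity, unit):
    """Convert a quantity into the base unit of measure using the
    counter/denominator factors of the quantity DataStore object."""
    base, counter, denominator = uom07[(material, unit)]
    return quantity * counter / denominator, base
```

For example, 12 CAR of material 4711 converts to 2880 UNIT, matching the examples later in this section.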
1b) Quantity DataStore Object: UOM07 (SID columns are not listed in the following table)

#Cmat07 | #0unit | 0base_uom | 0uomz1d | 0uomn1d
4711 | UNIT | | 25 |
4711 | CAR | UNIT | 240 |
4711 | BX | UNIT | 24 |
4711 | PAL | UNIT | 9600 |
4712 | KG | | 1000 |
4712 | UNIT | KG | |
4713 | CAR | UNIT | |
4713 | BX | UNIT | |
#Cmat07 | #0unit | 0base_uom | 0uomz1d | 0uomn1d
4711 | UNIT | | 25 |
4711 | CAR | UNIT | 240 |
4711 | BX | UNIT | 24 |
4711 | PAL | UNIT | 9600 |
4711 | CAR | BX | 10 |
The direct reference between CAR and BX is specified in the last row. This entry is incorrect: if data record 2 already exists in the object, the primary key is violated. If record 2 does not already exist, the entry is still incorrect because the column 0base_uom does not contain the base unit of measure from the master data (UNIT).
2) Master Data cruom1 (can be used to calculate the unit of measure from the unit of measure attribute)

#cruom1 | Crf1 | 0sales_unit | 0unit
M1 | C1 | KG |
M2 | C2 | CAR | BX
M3 | C3 | PAL | UNIT
3) Master Data cruom1KL (can be used to calculate the unit of measure from the unit of measure attribute)

#cruom1 | #0bp_contper | Crf1 | 0sales_unit | 0unit
M1 | Akino | C1 | KG |
M2 | Bertolini | C2 | CAR | BX
M3 | Smith | C3 | PAL | UNIT
Examples
The examples listed below are based on the example data given above. These examples illustrate the results produced by the different options available with
unit conversion:
Examples of Conversion with Fixed Target Unit of Measure
Examples of Conversion Using Factor from InfoObject
Examples of Conversion with Target Unit of Measure Using Attribute in InfoObject
Examples of Conversion with Fixed Target Unit of Measure and Dynamic Determination Without Options
Source:

Cmat07 | C2faCtor | C2faCtorf | C2kyf1 | C2kyf2
4711 | 1,5 | 1,7 | 12 CAR |

Result:

Cmat07 | C2faCtor | C2faCtorf | C2kyf1 | C2kyf2
4711 | 1,5 | 1,7 | 12 CAR | 2880 UNIT
Source:

Cmat07 | C2faCtor | C2faCtorf | C2kyf1 | C2kyf2
4711 | 1,5 | 1,7 | 12 CAR |

Result:

Cmat07 | C2faCtor | C2faCtorf | C2kyf1 | C2kyf2
4711 | 1,5 | 1,7 | 12 CAR | 120 BX
Source:

Cmat07 | C2faCtor | C2faCtorf | C2kyf1 | C2kyf2
4711 | 1,5 | 1,7 | 12 CAR |

Result:

Cmat07 | C2faCtor | C2faCtorf | C2kyf1 | C2kyf2
4711 | 1,5 | 1,7 | 12 CAR | 0,3 PAL
Cmat07
C2faCtor
C2faCtorf
C2kyf1
C2kyf2
4711
0,025
1,7
12 CAR
Cmat07
C2faCtor
C2faCtorf
C2kyf1
C2kyf2
4711
0,025
1,7
12 CAR
0,3 PAL
Cmat07
C2faCtor
C2faCtorf
C2kyf1
C2kyf2
4711
0,025
1,7
12 CAR
Cmat07
C2faCtor
C2faCtorf
C2kyf1
C2kyf2
4711
0,025
1,7
12 CAR
20.4 PAL
Cmat07
C2faCtor
C2faCtorf
C2kyf1
C2kyf2
CRUOM1
4711
0,025
1,7
12 CAR
M3
PUBLIC
2014 SAP SE or an SAP affiliate company. All rights reserved.
Page 60 of 137
Cmat07
C2faCtor
C2faCtorf
C2kyf1
C2kyf2
CRUOM1
4711
0,025
1,7
12 CAR
0,3 PAL
M3
Example 2: Conversion with Target Unit of Measure Using Attribute in InfoObject CRUOM1

Source
C2REQNR | CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
        | 4711   | 0,025    | 1,7       | 12 CAR |        | M1

Result
CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2  | CRUOM1
4711   | 0,025    | 1,7       | 12 CAR | 72000 G | M1
Example 3: Conversion with Target Unit of Measure Using Attribute in InfoObject CRUOM1

Source
C2REQNR | CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
        | 4711   | 0,025    | 1,7       | 12 CAR |        | M2

Result
CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
4711   | 0,025    | 1,7       | 12 CAR | 120 BX | M2
Example 4: Conversion with Target Unit of Measure Using Attribute in InfoObject CRUOM1

Source
C2REQNR | CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
        | 4711   | 0,025    | 1,7       | 12 CAR |        | M3

Result
CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2    | CRUOM1
4711   | 0,025    | 1,7       | 12 CAR | 2880 UNIT | M3
Example 1: Conversion Factors Determined Dynamically

Source
CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711   | 0,025    | 1,7       | 18 PAL |

Result
CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711   | 0,025    | 1,7       | 18 PAL | 172800 UNIT

Example 2: Conversion Factors Determined Dynamically Using Central Units of Measure Only (T006)

Source
C2REQNR | CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
        | 4711   | 0,025    | 1,7       | 18 PAL |

Result
CMAT07 | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711   | 0,025    | 1,7       | 18 PAL | No conversion possible

Conversion is not possible because PAL and G do not belong to the same dimension.
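The dynamic determination above (material-specific factors first, then the central table, with a dimension check) can be sketched as follows. This is an illustrative Python sketch under assumed table contents, not an SAP API.

```python
# Illustrative sketch of dynamic factor determination: try the
# material-specific table first, then a central (T006-like) table, and
# refuse to convert across dimensions. Table contents are hypothetical.
DSO = {("4711", "PAL", "UNIT"): 9600}          # material-specific factors
CENTRAL = {("KG", "G"): 1000}                  # central, material-independent
DIMENSION = {"PAL": "quantity", "UNIT": "quantity", "KG": "mass", "G": "mass"}

def convert(material, value, from_unit, to_unit):
    if DIMENSION[from_unit] != DIMENSION[to_unit]:
        return None  # "No conversion possible", e.g. PAL -> G
    if (material, from_unit, to_unit) in DSO:
        return value * DSO[(material, from_unit, to_unit)]
    if (from_unit, to_unit) in CENTRAL:
        return value * CENTRAL[(from_unit, to_unit)]
    return None

print(convert("4711", 18, "PAL", "UNIT"))  # 172800
print(convert("4711", 18, "PAL", "G"))     # None
```

The first call reproduces Example 1 (18 PAL = 172800 UNIT); the second mirrors Example 2, where the dimension check fails.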
#C2PLANT | #C2COUNTRY | #CRXMATKL | 0BASE_UOM (Base Unit of Measure) | 0APO_STORGU (Stock Level)
P001     | DE         | 4711      | UNIT                            | PAL
P001     | DE         | 4712      | ROL                             | PAL
P002     | GB         | 4712      | KG                              | PAL
P025     | SE         | 4713      | UNIT                            | PAL
The unit 0BASE_UOM is entered as the base unit of measure in the master data on the Business Explorer tab page.
The DataStore object uomcrxkl is also created and activated from the master data (Business Explorer tab page, Units of Measure for Characteristic).
The structure of such a DataStore object always complies with the following rule:
<characteristic> <compounding for characteristic, where applicable> <target unit of measure> <base unit of measure> <conversion factor: counter> <conversion factor: denominator> <SID columns for all characteristics not listed in the following table>
#CRXMATKL | #C2PLANT | #C2COUNTRY | #0UNIT | 0BASE_UOM | 0UOMZ1D | 0UOMN1D
4711      | P001     | DE         | G      | UNIT      | 25      |
4711      | P001     | DE         | CAR    | UNIT      | 240     |
4711      | P001     | DE         | BX     | UNIT      | 24      |
4711      | P001     | DE         | PAL    | UNIT      | 9600    |
4712      | P002     | GB         | G      | KG        | 1000    |
4712      | P002     | GB         | UNIT   | KG        |         |
4712      | P001     | DE         | TO     | ROL       |         |
4712      | P001     | DE         | UNIT   | ROL       | 450     |
4713      | P025     | SE         | CAR    | UNIT      |         |
4713      | P025     | SE         | BX     | UNIT      |         |
#CRXMATKL | #C2PLANT | #C2COUNTRY | #0UNIT | 0BASE_UOM | 0UOMZ1D | 0UOMN1D
4711      | P001     | DE         | G      | UNIT      | 25      |
4711      | P001     | DE         | CAR    | UNIT      | 240     |
4711      | P001     | DE         | BX     | UNIT      | 24      |
4711      | P001     | DE         | PAL    | UNIT      | 9600    |
4711      | P001     | DE         | CAR    | BX        | 10      |
The direct reference between CAR and BX is specified in the last row. This entry is incorrect for two reasons: if data record 2 already exists in the object, the primary key is violated; if record 2 does not yet exist, the entry is still incorrect because 0BASE_UOM does not contain the base unit of measure from the master data (UNIT).
2) Master Data cruom1 (can be used to calculate the unit of measure from the unit of measure attribute)

#CRUOM1 | #CRF1 | 0SALES_UNIT | 0UNIT
M1      | C1    | KG          | G
M2      | C2    | CAR         | BX
M3      | C3    | PAL         | UNIT
3) Master Data cruom1KL (can be used to calculate the unit of measure from the unit of measure attribute)

#CRUOM1 | #0BP_CONTPER | #CRF1 | 0SALES_UNIT | 0UNIT
M1      | Akino        | C1    | KG          | G
M2      | Bertolini    | C2    | CAR         | BX
M3      | Smith        | C3    | PAL         | UNIT
Examples
The examples listed below are based on the example data given above. These examples illustrate the results produced by the different options available with
unit conversion:
Examples of Conversion with Fixed Target Unit of Measure
Examples of Conversion Using Factor from InfoObject
Examples of Conversion with Target Unit of Measure Using Attribute in InfoObject
Examples of Conversion with Fixed Target Unit of Measure and Dynamic Determination Without Options
Examples of Conversion with Fixed Target Unit of Measure

Fixed target unit of measure UNIT:

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 1,5      | 1,7       | 12 CAR |

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 1,5      | 1,7       | 12 CAR | 2880 UNIT

Fixed target unit of measure BX:

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 1,5      | 1,7       | 12 CAR |

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 1,5      | 1,7       | 12 CAR | 120 BX

Fixed target unit of measure PAL:

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 1,5      | 1,7       | 12 CAR |

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 1,5      | 1,7       | 12 CAR | 0,3 PAL
Examples of Conversion Using Factor from InfoObject

Factor taken from InfoObject C2FACTOR (0,025):

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR |

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR | 0,3 PAL

Factor taken from InfoObject C2FACTORF (1,7):

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR |

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR | 20,4 PAL
Example 1: Conversion with Target Unit of Measure Using Attribute in InfoObject CRUOM1

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR |        | M3

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2  | CRUOM1
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR | 0,3 PAL | M3
Example 2: Conversion with Target Unit of Measure Using Attribute in InfoObject CRUOM1

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR |        | M1

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2  | CRUOM1
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR | 72000 G | M1
Example 3: Conversion with Target Unit of Measure Using Attribute in InfoObject CRUOM1

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR |        | M2

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR | 120 BX | M2
Example 4: Conversion with Target Unit of Measure Using Attribute in InfoObject CRUOM1

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2 | CRUOM1
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR |        | M3

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2    | CRUOM1
4711     | P001    | DE        | 0,025    | 1,7       | 12 CAR | 2880 UNIT | M3
Example 1: Conversion Factors Determined Dynamically

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 0,025    | 1,7       | 18 PAL |

Result
C2REQNR | CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
        | 4711     | P001    | DE        | 0,025    | 1,7       | 18 PAL | 4320000 G

Example 2: Conversion Factors Determined Dynamically

Source
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 0,025    | 1,7       | 18 PAL |

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 0,025    | 1,7       | 18 PAL | 172800 UNIT
Example 3: Conversion Factors Determined Dynamically Using Central Units of Measure Only (T006)

Source
C2REQNR | CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
        | 4711     | P001    | DE        | 0,025    | 1,7       | 18 PAL |

Result
CRXMATKL | C2PLANT | C2COUNTRY | C2FACTOR | C2FACTORF | C2KYF1 | C2KYF2
4711     | P001    | DE        | 0,025    | 1,7       | 18 PAL | No conversion possible

Conversion is not possible because PAL and G do not belong to the same dimension.
Integration
The function for making local calculations is available for structural components in the Query Designer in the Selection/Formula Properties dialog, in the BEx
Analyzer in the Key Figure Properties dialog, and in the Context Menu of Web applications.
Features
Local calculations include only those numbers in the calculation that appear in the current view of the report. In this way, you override the standard analytic
engine calculations.
Note that these local calculations only change the display of the values. With subsequent calculations, such as formulas, the system does not use the values
changed for the display, but rather the original values specified by the analytic engine.
For more information, see:
Calculate Results As
Calculate Single Values As
Integration
You can use this function with selections in structural components, cells, and restricted key figures.
Features
You can define entire selections as constant for structural components and cells. During navigation, a constant selection is independent of all filters.
In addition, you can define components of selections, that is, individual characteristics and their filter values, as constant. During navigation, only the selection
relating to this characteristic remains unaffected by filters.
You cannot select an entire restricted key figure as a constant, only its characteristics.
Activities
You can activate the Constant Selection setting for an entire selection in the Selection/Formula Properties dialog box.
You can select a characteristic of a selection as constant in the following way:
1. In the context menu of a selection, choose Edit . The Change Selection dialog box appears.
2. In the Selection Details area, in the context menu of the characteristic that you are using in the selection, choose Constant Selection .
Proceed in the same way to select the characteristics that you are using in a restricted key figure or in a selection of a cell as constant.
Examples
Market Index
In a product list ( Product is in the drilldown), you want to display the sales revenue normalized for (based on) a specific product group rather than the total
sales revenue. Using the Constant Selection function, you can select the sales revenue of a specific product group as constant for the drilldown. You can
now relate the sales revenue of the individual products in the product group to the sales revenue of the product group. This allows you to determine the
revenue from each individual product as a proportion of the sales revenue for the product group.
For more information, see Example: Market Index.
Plan/Actual
In the InfoCube, actual values exist for each period. Plan values only exist for the entire year. These are posted in period 12. To compare the PLAN and
ACTUAL values, you have to define a PLAN and an ACTUAL column in the query, restrict PLAN to period 12, and mark this selection as a constant selection.
This means that you always see the plan values, whichever period you are navigating in.
MultiProvider Problems
Furthermore, you can use the constant selection to solve MultiProvider problems.
The MultiProvider has two InfoCubes with data about the price and quantity of various products and the corresponding plants. InfoCube 2 also contains
characteristic Customer. In a drilldown according to Customer, all data from InfoCube 1 is displayed under initial value # (not assigned).
You can now define a constant selection on Customer and exclude initial value # (not assigned) in the filter. This filters out the rows with initial value #
(not assigned), but the data about key figure Price is retained.
For more information, see Example: Using Constant Selection with MultiProviders.
The MultiProvider has an InfoCube with the actual values and an InfoCube with the plan values. If the Calendar Month characteristic is in the plan
InfoCube but not in the actual InfoCube, in a drilldown according to Calendar Month , all data is displayed under value # (not assigned).
You can now define a constant selection for Calendar Month = # (not assigned). In doing so, you select the yearly plan value as constant independently
of the filter and it is displayed in each row, therefore for every month. You can now divide the yearly plan value by 12. Then you can make plan/actual
comparisons on a monthly basis, although there are only yearly values in the plan InfoCube.
To help understand the concept of constant selection, we recommend that you first compare the Absolute Sales and Constant Sales
(Selection) columns in the two tables. You can then compare the differences between the Constant Sales (Selection) and Constant Sales
(Formula with SUMCT) columns. The two Normalized Sales columns represent typical situations.
Example Table for Market Index

Product Group   | Product        | Absolute Sales | Constant Sales (Selection) | Normalized Sales (Formula) | Constant Sales (Formula with SUMCT) | Normalized Sales (Formula with SUMCT)
Office supplies | Paper          | 30             | 120                        | 25%                        | 120                                 | 25%
                | Envelopes      | 30             | 120                        | 25%                        | 120                                 | 25%
                | Ballpoint pens | 60             | 120                        | 50%                        | 120                                 | 50%
                | Result         | 120            | 120                        | 100%                       | 240                                 | 50%
Furniture       | Chair          | 60             | 120                        | 50%                        | 120                                 | 50%
                | Table          | 60             | 120                        | 50%                        | 120                                 | 50%
                | Result         | 120            | 120                        | 100%                       | 240                                 | 50%
Overall Result  |                | 240            | 240                        | 100%                       | 240                                 | 100%
Product Group   | Product        | Absolute Sales | Constant Sales (Selection) | Normalized Sales (Formula) | Constant Sales (Formula with SUMCT) | Normalized Sales (Formula with SUMCT)
Office supplies | Paper          | 30             | 120                        | 25%                        | 90                                  | 33.3%
                | Ballpoint pens | 60             | 120                        | 50%                        | 90                                  | 66.6%
                | Result         | 90             | 120                        | 75%                        | 210                                 | 42.85%
Furniture       | Chair          | 60             | 120                        | 50%                        | 120                                 | 50%
                | Table          | 60             | 120                        | 50%                        | 120                                 | 50%
                | Result         | 120            | 120                        | 100%                       | 210                                 | 57.15%
Overall Result  |                | 210            | 240                        | 87.5%                      | 210                                 | 100%
By removing Envelopes from the drilldown, the result for Office Supplies and the overall result for Absolute Sales is reduced by 30. This is a reduction of
25% with respect to the product group Office Supplies and 12.5% with respect to the overall result.
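The difference between the two denominator strategies can be re-derived with a quick calculation. This is an illustrative Python sketch, not BEx formula syntax: constant selection keeps the full group total as the denominator, while SUMCT recomputes it from the rows currently in the drilldown.

```python
# Recompute the "Office supplies" ratios after Envelopes is removed:
# constant selection keeps the full group total (120) as denominator,
# while SUMCT uses only the visible rows (30 + 60 = 90).
visible = {"Paper": 30, "Ballpoint pens": 60}   # Envelopes removed
constant_total = 120                            # fixed by constant selection
sumct_total = sum(visible.values())             # 90, recomputed per view

for product, sales in visible.items():
    print(product,
          f"{sales / constant_total:.0%}",      # Normalized Sales (Formula)
          f"{sales / sumct_total:.1%}")         # ... (Formula with SUMCT)
```

For Paper this prints 25% and 33.3%, matching the second table above (the table truncates 2/3 to 66.6% where this sketch rounds to 66.7%).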
InfoCube 1

Plant | Product     | Price
101   | Candy Tin   | 0,12
101   | Coffee Mug  | 0,14
102   | Candy Tin   | 0,11
103   | Mouse Pad   | 0,23
104   | Post-It Set | 0,15
InfoCube 2

Plant | Product     | Customer     | Quantity
101   | Candy Tin   |              | 100
101   | Coffee Mug  |              | 110
102   | Candy Tin   |              | 105
103   | Mouse Pad   | Thompson Inc | 115
104   | Post-It Set |              | 110
Since characteristic Customer is contained in InfoCube 2 but not in InfoCube 1, in a drilldown according to Customer all data from InfoCube 1 is displayed under initial value # (not assigned). The query would look as follows on the MultiProvider:
Plant | Product     | Customer         | Price | Quantity
101   | Candy Tin   | # (not assigned) | 0,12  |
101   | Candy Tin   |                  |       | 100
101   | Coffee Mug  | # (not assigned) | 0,14  |
101   | Coffee Mug  |                  |       | 110
102   | Candy Tin   | # (not assigned) | 0,11  |
102   | Candy Tin   |                  |       | 105
103   | Mouse Pad   | # (not assigned) | 0,23  |
103   | Mouse Pad   | Thompson Inc     |       | 115
104   | Post-It Set | # (not assigned) | 0,15  |
104   | Post-It Set |                  |       | 110
You can now define a constant selection on Customer and exclude initial value # (not assigned) in the filter. This filters out the rows with initial value # (not
assigned). The data about key figure Price is retained, however.
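The effect described above can be sketched as follows. This is a minimal Python illustration of the MultiProvider union and of what constant selection preserves, not how the analytic engine is actually implemented.

```python
# Sketch of the MultiProvider union: InfoCube 1 has no Customer, so its rows
# appear under "#" (not assigned). Data values are taken from the example
# tables above; this is illustrative only.
cube1 = [{"plant": "103", "product": "Mouse Pad", "customer": "#", "price": 0.23}]
cube2 = [{"plant": "103", "product": "Mouse Pad", "customer": "Thompson Inc",
          "quantity": 115}]

union = cube1 + cube2

# Plain filter: excluding "#" also drops the Price row.
plain = [r for r in union if r["customer"] != "#"]

# Constant selection on Customer: the Price rows are read with a fixed
# Customer = "#" selection, independent of the filter, so they survive.
constant = [r for r in union if r["customer"] != "#" or "price" in r]

print(len(plain), len(constant))  # 1 2
```

With a plain filter only the Quantity row remains; with constant selection both the Price row and the Quantity row are retained.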
Procedure
1. In the Query Designer, define a selection that contains key figure Price and characteristic Customer.
2. Restrict characteristic Customer to initial value # (not assigned).
3. Set the constant selection on characteristic Customer.
4. In the default values of the filter on characteristic Customer, exclude initial value #.
Instead of using the constant selection, you can also solve this MultiProvider problem using InfoSets.
InfoSets, however, are defined in the data model and are not very dynamic. Using the constant selection that you set in the Query Designer, you can set the join of the data records very flexibly to any InfoObject that is part of a selection, for each query based on a MultiProvider.
Result
In the drilldown, the rows with initial value # (not assigned) on characteristic Customer are filtered out, but the data about key figure Price is retained. The query looks like this:
Query Based on the MultiProvider with Constant Selection on Customer

Plant | Product     | Customer     | Price | Quantity
101   | Candy Tin   |              | 0,12  | 100
101   | Coffee Mug  |              | 0,14  | 110
102   | Candy Tin   |              | 0,11  | 105
103   | Mouse Pad   | Thompson Inc | 0,23  | 115
104   | Post-It Set |              | 0,15  | 110
Prerequisites
To use the RRI in a BEx Query or Web application, you first have to make the necessary settings with sender/receiver assignment.
See Editing Sender/Receiver Assignments to the RRI in the BI System.
If you want to jump from a Web application to a transaction or ABAP/4 report using the RRI, you first need to install an Internet Transaction Server (ITS) for the target system. The transaction or ABAP report is then displayed in the SAP GUI for HTML, which is part of the ITS. The ITS is also used for jump targets within the BI server; however, it does not have to be installed separately there, because it is automatically included in a BI system. The URL for starting a transaction in the SAP GUI for HTML is generated by the BI server.
Features
Queries, transactions, reports, and Web addresses can be jump targets. The parameterization of the target action is taken from the context of the cell from
which you have jumped. You can set parameters for calling a BEx query or a BEx Web application using input variables that are filled from the selection
conditions and the element definitions of the selected cells in the sender query.
More information about the exact process: Process when Calling the RRI
Example
For your cost center report ( Sender ), you want to request master data from an SAP system ( Receiver ).
For a BEx Query with the up-to-date sales figures for your customer ( Sender ), you want to request up-to-date stock market data on your customer listed on
the stock exchange from the Internet.
The sender contains the characteristic City in the drilldown, and the receiver contains the corresponding navigation attribute Zip Code . The
restriction of City on the zip code is applied and passed to the navigation attribute Zip Code in the receiver.
Referencing characteristics are mapped to the basic characteristic.
When jumping to another system, note that the mapping rules are created in the target system. Make sure that all the InfoObjects with implicit mapping
rules from the sending query also exist in the target system. Otherwise the jump will not work.
There are some special features when handling the different receivers.
More information: Receivers
Query: Enter the technical name of the sender query or select a query using input help.
InfoCube: If you want to assign the same jump target to all queries for an InfoProvider, enter the technical name of the required InfoProvider or select it using input help.
3. Choose Create. The Maintain Sender/Receiver Assignment dialog box appears.
4. Under Report Type, choose a receiver. You have the following options:
BEx Query: Jump to a query that was created using the BEx Query Designer. More information: BEx Query As a Receiver
BEx Web Application: Jump to a BEx Web application, that is, an executed Web template that was created using the BEx Web Application Designer. This requires the Java-based runtime of SAP NetWeaver BI.
BEx Web Application (SAP BW 3.x): Jump to an ABAP-based BEx Web application (SAP BW 3.x) that was created using the BEx Web Application Designer (version SAP BW 3.x).
Crystal Report: Jump to a formatted report in Crystal Enterprise. You can also use a BEx report for formatted reporting.
InfoSet Query: Jump to an InfoSet query (queries on classic InfoSets). InfoSet queries are usually queries on master data.
Transaction, ABAP Report: Jump to any target in the Web or SAP GUI for HTML. The call and the parameters can be modified using customer-specific coding.
Web Address: Jump to any Web address and pass the parameters in the URL. More information: Creating a Web Address As a Receiver
For more information about the features of the various types, see Receivers.
5. Choose a Target System. You have the following options:
a. Local: The jump target is within the BI system.
b. Source System: The jump target is outside of the BI system.
One source system as a target system:
Specify the name of the source system. You can also choose the source system using input help.
All source systems as target systems:
Choose All Source Systems . Specify the source system in which you want to choose the required report initially.
Log on to this source system.
6. In the Report field, enter a description for the receiver report. Once you have saved your entry, this description is displayed as the report title.
7. Choose Transfer. The Maintain Sender-Receiver Assignment screen appears.
8. Save your entries.
9. For special cases, you can also maintain the assignment details. More information: Maintaining Assignment Details.
Report title: Specify a name.
Source system: Choose the required source system using input help. You can assign all source systems by entering *.
InfoSource: If an InfoProvider is filled from several InfoSources, you can specify the InfoSource from which you want to extract data. In the InfoSource column, choose the InfoSource you want to use via input help.
If you also want to change the Report Type, Target System, or Report settings, choose the corresponding change function. The maintenance dialog box appears again. Make the required changes and choose Transfer.
Result
Jump targets that have been assigned to a BEx query can be selected in Web applications and in the BEx Analyzer. You access them from the context menu
under the Goto function.
More Information:
BEx Analyzer: Goto
Web Applications: Goto
1.1.9.3 Receivers
Features
In the following, we will explain how to deal with the peculiarities of the various receivers.
BEx Query
When you call the RRI, the selections are passed as described: Process when Calling the RRI
More information: BEx Query As a Receiver.
Web Application
For Web applications, the same applies as for BEx Queries. If a Web application contains multiple queries, the RRI is called for each query separately.
Crystal Report
When calling the RRI with a Crystal Report as the receiver, only the variables are filled. There is no transfer of filters as with BEx Queries.
Transaction and ABAP/4 Report
Calling the RRI with a transaction or an ABAP/4 report as the receiver is done with the RRI of the SAP NetWeaver Application Server. This is possible in an ERP system, a CRM system, or within the BI system. The selections are prepared by the BI system, which does not recognize the transaction or the report. The assignment is transferred from the RRI of the SAP NetWeaver Application Server using inverse transformation rules. There must also be a complete chain from the DataSource of the source system to the InfoSource, through transformations, up to the InfoProvider. This does not mean that data actually has to be loaded using this chain. If this chain does not exist, however, the RRI cannot transfer the selections to the source system.
You can only call the RRI for fields with a dictionary reference. For ABAP reports, this means that the parameter has to be declared as
PARAMETERS param LIKE <table_field>
For transactions, it means that the screen has to have a dictionary reference. Not every transaction can be called with the RRI of the SAP Application Server. For some transactions (such as SV03), you need to write a utility program if you still want to call them using the RRI.
See also Creating a Transaction As a Receiver.
InfoSet query
The same applies to InfoSet queries as does for transactions and ABAP/4 reports.
Web Address
When calling the RRI with a Web address as receiver, the assignment details have to be maintained. You have to specify the name of the input field in the
field name column. URL variables cannot be used.
See also Creating a Web Address As a Receiver.
Features
To call a query as a receiver with the RRI, you need to be aware of a few guidelines during query definition.
General
Characteristics that are to be filled from the sender query must be defined as free characteristics. A hierarchy node restriction can also be transferred to
free characteristics as a property, for example.
Changeable variables for the recipient query are not filled by the RRI.
Selections for various InfoObjects are transferred when the InfoObjects have the same reference characteristic.
See also the section BEx Query under Receivers.
Using Hierarchies
When using hierarchies in the query, you should be aware of the following cases:
Sender and receiver queries use the same hierarchy or hierarchies that are based on the same basic characteristic.
Jumping from one node in the hierarchy of the sender query to the same node of the hierarchy in the receiver query works as usual. The hierarchy settings are
transferred with the RRI.
Sender and receiver queries use different hierarchies:
The hierarchy setting for the receiver query remains unchanged and the selections for the RRI are deleted. The system filters by leaves in the node.
A few InfoObjects are different from one another, but they are treated as assignable by the system. This is a special development for Business
Content. For example, values are transferred from account number (0ACCOUNT) to cost element (0COSTELMNT) or to general ledger account
(0GL_ACCOUNT). For more information, see the Example for a BEx Query As a Receiver.
The receiver query uses a hierarchy, but the sender query does not:
If the selections of the RRI only consist of single values, the hierarchy setting for the receiver query remains unchanged; otherwise the hierarchy is
deactivated.
The sender query uses a hierarchy, but the receiver query does not:
The hierarchy is set to inactive.
Hierarchies and Compounded Characteristics
With hierarchies for compounded characteristics, you have to create a variable for the basic characteristic of the hierarchy for the receiver query so that the
values can be transferred correctly.
If you were to define the query without these variables, the dynamic filter would be used and the hierarchy would be deactivated due to the compounding of the
InfoObject. When you use this non-changeable variable, the RRI can transfer the value to this variable and the hierarchy remains active.
Example
As a special case with different hierarchies and compounded characteristic, see the example Example for BEx Query as Receiver.
Hierarchies that are based on different characteristics cannot normally be transferred. However, this is a special case: the characteristics cost element (0COSTELMNT) and account number (0ACCOUNT) have the same key and are recognized by the system as similar characteristics whose values can be transferred to one another. The same link exists between account number (0ACCOUNT) and general ledger account (0GL_ACCOUNT). If this were not the case, the hierarchy in this example could not be transferred with the RRI.
Defining your receiver query:
As the hierarchy basic characteristic is a compounded characteristic (account number is compounded to chart of accounts), you have to create a variable for
characteristic account number so that the values can be transferred correctly.
If you define the query without this variable, the dynamic filter will be used and the hierarchy will be deactivated because the InfoObject is
compounded.
By using this variable, which cannot be changed, the RRI can transfer the value to this variable and the display hierarchy remains active.
Create a variable with the following properties for characteristic account number:
Variable type
Processing type
Variable represents
Selection option
Variable value is
Optional
Switched on
Switched off
When the hierarchies for both queries are structured in the same way, the jump from the sender query to the receiver query appears as follows:
Prerequisites
If you want to jump from a Web application to a transaction or ABAP/4 report using the RRI, an ITS for the target system has to be assigned beforehand.
The value of the input field to be supplied must be known when the jump is made (for example, by entering a single value on the selection screen of the
sender or by the cursor position at the time of the jump).
Sender and receiver fields that correspond to one another generally must link to the same data element or at least to the same domain, otherwise the
values cannot be assigned to one another.
The assignment of sender and receiver fields must always be a 1:1 assignment. For example, the transaction called cannot have two input fields of the same data type on its start screen; otherwise it is not clear which of the fields is to be supplied, which means neither of them is supplied.
There has to be a complete chain from the DataSource of the source system to the InfoSource, through update rules up to the target. See also the section
Transaction and ABAP/4 Report in Receivers.
Procedure
Simple Cases:
1.
2.
3.
4.
5. Choose
Complex Cases:
For some transactions, it is necessary to make a detailed assignment. One reason for this can be that the transaction uses a hidden initial screen and does
not fill the parameter using the memory ID of the data element.
Proceed as follows after you have created the sender-receiver assignment as described above.
1. Select your sender-receiver assignment and choose
Assignment Details. See also Maintain Assignment Details.
2. As the type, choose Table Field . The columns Field Name, Data Element, Domain and SET/GET Parameter become input ready.
3. Specify the field name, data element, domain and parameter ID for the receiver transaction. You need to know this information because no input help is
available. You can usually find the parameter ID in the ABAP dictionary entry for the data element.
If the RRI Jump Still Does Not Work:
Many transactions and programs are not prepared for a call with parameters from the RRI, for example, because programs with additional screens are called in
transactions and the sender and target fields are not compatible.
In this case, it can help to write a custom program that has all the necessary parameters and tables as selection fields and then calls the actual transaction with the ABAP statement CALL TRANSACTION, or calls the desired program with SUBMIT.
Also refer to the documentation about the ABAP commands and SAP Notes 363203 and 694244 (as an example for a jump to transaction KSB5) and SAP
Note 383077 (RRI: Transaction call fails).
Individual fields must be declared as PARAMETERS, and tables must be declared as TABLES. You can only jump to tab pages from within such custom
developments.
If you jump from a node of a BEx query, the hierarchy is expanded before being passed to the target program or the target transaction in the leaves of the
parent node. In this scenario, a list of values is always passed. It is not possible to pass the node name itself to a transaction or program.
Prerequisites
It may be necessary to insert an InfoObject that contains the value to be transferred into your sender query. See also the Examples for Jumping to Web
Pages.
Procedure
1. You are on the Maintain Sender-Receiver Assignments screen. Specify a query or an InfoProvider as sender and choose
Create .
2. Choose Web Address as the report type for the receiver.
3. Enter the required Web address for the receiver report using input help. This Web address has to link directly to the input field. Examples for Jumping to Web Pages explains how to determine this.
If this Web page changes, you need to change your sender-receiver assignment accordingly.
4. Choose
Result
You can call the Web address from your query or Web application with the associated search term using Jump . The search parameter is then filled with the
key for the associated InfoObject and the search results are displayed on the Web page.
Example
See Examples for Jumping to Web Pages.
Query Definition:
Add the characteristic Stock Search Term to your query. It is a navigation attribute for Customer . This characteristic must appear in the drilldown in the
query. It is passed as a parameter when the RRI is called. From the properties of the characteristic, define that the characteristic should not be displayed.
The characteristic Stock Search Term must be defined so that the key of the characteristic passes the precise search term.
The characteristic must be defined with a length of three places; otherwise the preceding places are filled with zeros.
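The padding behavior can be illustrated with a short sketch (Python; `pad_key` is a hypothetical helper, not an SAP function, that mimics how a key shorter than the characteristic's defined length is filled with leading zeros):

```python
# Hypothetical illustration (pad_key is not an SAP function): keys are
# padded to the characteristic's defined length with leading zeros, so
# a mismatched length changes the value that reaches the web page.

def pad_key(term: str, char_length: int) -> str:
    """Pad a key to the characteristic's defined length with leading zeros."""
    return term.rjust(char_length, "0")

print(pad_key("SAP", 3))   # length matches the search term: "SAP"
print(pad_key("SAP", 10))  # defined length too large: "0000000SAP"
```

With a length of exactly three places, the stock symbol is passed unchanged.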
Determining the Web Address and the Field Name:
To jump to the search term on the Web page, you need the Web address of the page to which the input field for the stock search term refers as well as the
name of the input field.
You have two options:
1. You look at the HTML code of the Web page:
This is the corresponding part of the source code of the Web page CNN Money; the relevant parameters are highlighted in red. The tag form is followed
by the URL that sends the data; the tag input is followed by the attribute name , which gives the name of the expected parameter:
<form action="http://quote.money.cnn.com/quote/quote" method="get">
<td width="60" valign="bottom">
<img src="http://i.cnn.net/money/images/searchbar/enter_symbol.gif" alt="" width="49" height="16" hspace="3" vspace="0" border="0"></td>
<td width="55" valign="bottom">
<input type="text" name="symbols" value="" size="5" maxlength="38" style="font-size: 11px"></td>
...
2. Execute the action on the Web page with a stock search term. On the Web page, enter for example SAP in the search field and confirm it. Copy the URL
of the new window.
http://quote.money.cnn.com/quote/quote?symbols=SAP
The query string appears after the question mark. All the parameters name=value are listed here; individual parameters are separated with &.
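Both options can also be scripted. The following sketch (Python standard library only; the HTML fragment mirrors the CNN Money snippet above) extracts the form action URL and input-field name from the page source, then builds and decomposes the resulting URL with its query string:

```python
# Sketch: determine the target URL and field name from the page source
# (option 1) and build the resulting jump URL (option 2). The query
# string after "?" holds name=value pairs separated by "&".
from html.parser import HTMLParser
from urllib.parse import urlencode, urlparse, parse_qs

class FormScanner(HTMLParser):
    """Collects the form action URL and the names of text input fields."""
    def __init__(self):
        super().__init__()
        self.action = None
        self.input_names = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form" and "action" in attrs:
            self.action = attrs["action"]
        elif tag == "input" and attrs.get("type") == "text":
            self.input_names.append(attrs.get("name"))

page = '''<form action="http://quote.money.cnn.com/quote/quote" method="get">
<input type="text" name="symbols" value="" size="5" maxlength="38"></form>'''

scanner = FormScanner()
scanner.feed(page)

# Build the jump URL from the extracted field name and a search term.
url = scanner.action + "?" + urlencode({scanner.input_names[0]: "SAP"})
print(url)  # http://quote.money.cnn.com/quote/quote?symbols=SAP

# Decomposing the copied URL yields the same parameters.
print(parse_qs(urlparse(url).query))  # {'symbols': ['SAP']}
```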
Sender/Receiver Assignments:
Proceed as in Creating a Web Address as a Receiver. Specify the Web address you determined beforehand: http://quote.money.cnn.com/quote/quote.
In the assignment details for the InfoObject Stock Search Term, enter the field name SYMBOLS that you determined beforehand. You can set the Required Entry indicator so that the Web page is only called if this stock search term is found on the Web page.
Give the report the title CNN Money.
Jump to Google
You want to search the Internet for information about a customer in a query. To do so, jump to the customer on the Web page of the search engine Google using the context menu.
Query Definition:
Define your query as described above with a characteristic that has the search term as key.
Sender/Receiver Assignments:
Proceed as in Creating a Web Address as a Receiver. Determine the Web address that refers to the input field for the search as described above. In this case
it is: http://www.google.de/search?. Give the report the title Google.
In the assignment details, enter q as field name. This is the name of the input field that you determined beforehand as described above. When the RRI is
called, the key of the corresponding InfoObject is passed to the Web page.
Procedure
To implement a specific enhancement:
1. Create an ABAP class with the interface IF_RS_BBS_BADI_HANDLER.
More information: Creating ABAP Classes with Interface IF_RS_BBS_BADI_HANDLER
2. Create an implementation to the classic BAdI RS_BBS_BADI.
More information: Creating an Implementation to the BAdI RS_BBS_BADI
3. Create the sender-receiver assignment.
More information: Creating Sender-Receiver Assignments
Result
You can use the new jump target in the executed query.
Example
The procedures are based on the following example:
A query has the characteristic Product in its rows and the two key figures Revenue and Quantity in its columns. A further characteristic Customer is one
of the free characteristics. Upon jumping, the new report type should cause the free characteristic Customer to be drilled down in the rows. At the same time,
a key figure is filtered.
This is shown in the first part of the example.
For more information about URL parameters used, see
The UID of the key figure is visible in the BEx Query Designer in the properties of the key figure on the tab page Enhanced .
The second part of the example shows how a key value can be passed to the Google search. You can also pass a key value to Google without this
implementation, but with the enhancement it is possible to also pass the text. The text, however, must first be determined from the master data.
For more information about the solution without an implementation, refer to
In general, you should be able to use the RRI to create the sender-receiver assignment, without having to specify any other assignment
details.
In certain cases, however, it may be necessary to maintain these assignment details, such as
If you do not want to transfer certain selections, for example differing date information
If an assignment is not clear, for example, if the vendor needs to be assigned to the purchaser.
The system does not check the assignment details you maintained to make sure they make sense.
Prerequisites
You have created a sender-receiver assignment.
Procedure
1. Select your sender-receiver assignment in the Receiver table.
2. Choose
Assignment Details. The Field Assignments dialog box appears.
3. If you want to make changes to the individual fields, choose the required settings using input help. You can assign the processing method ( type ) of
selections for characteristics and the permitted Selection Type , as well as designate the field as Required Entry Field .
You can choose between the following input options:
Processing Method ( Type )
Effects
Generic (default)
V Variable
Selections are transferred directly to the specified variables. In the Field Name
column, you have to enter the technical name of the variables. The Data Element ,
Domain and Parameter ID columns are automatically filled from the properties of
the variables.
This processing method is only applicable for BEx Queries. This is recommended if
you want to fill a variable in the receiver query and that variable has no technical
connection to the InfoObject of the sender query.
I InfoObject
Selections are transferred directly to the specified characteristic. In the Field Name
column, you have to enter the technical name of the characteristic. The Data
Element , Domain and Parameter ID columns are automatically filled from the
properties of the characteristic.
This processing method is only applicable for BEx Queries. It is useful when an
assignment is not unique and a specific characteristic is to be transferred explicitly.
The characteristics assigned to one another have to have the same (noncompounded) key. For example, the characteristics 0MATERIAL and 0MATPLANT
can be assigned to one another.
3 Table field
Selections are transferred directly to the specified field. This setting is only useful for
non-BI jump targets. The Field Name, Data Element and Domain columns have
to be filled correctly. It also makes sense to fill the column Parameter ID with the
correct parameter ID. You can usually find the parameter ID in the ABAP dictionary
entry for the data element.
See also Creating a Transaction As a Receiver.
P URL parameters
This setting is only useful for the Web Address jump target. Specification of a field
name is then mandatory.
See also Creating a Web Address As a Receiver.
X Delete
All selections for this characteristic are deleted and not transferred to the jump
target.
This setting is useful when you do not want to transfer certain selections. For
example, you may not want selections for a characteristic to be transferred to a
characteristic that has the same reference characteristic.
When using these explicit rules, make sure that the assignment is already made in the sender system. If you jump to another system, note
that the sending and receiving InfoObjects of the assignment must exist in the sender system. Otherwise the jump will not work.
Choosing a selection type is worthwhile when the jump target is a longer-running query or transaction. When you choose a selection type with the
Required Entry indicator, during the jump you can prevent a report that was called from starting if it does not fulfill certain conditions. In this way, you
avoid putting unnecessary load on the system. Before the jump, the system checks whether the selection that is marked as a required entry is present in
the jump target; otherwise the jump is not executed.
For example, for a jump to an ERP system in the transaction MM03, you can mark the InfoObject Characteristic with selection type P
Parameter as a required entry field. The jump is only executed when the InfoObject Material is found in the ERP system.
Permitted Selection Type
Effects
* (default)
No restriction of the selection type. Single values, intervals, free selection options
and hierarchy nodes can be transferred.
P Parameters
E Individual values
I Interval
S Selection option
You can choose single values, intervals, and free selection options (such as >, <, <>). Hierarchy nodes are expanded into lists of single values.
H Hierarchy nodes
When the system calls the receiver, the settings made in the Field Assignments dialog box are applied. The system proposes all other field assignments
generically.
4. Choose Close. The assignment details you defined are saved and are taken into account when the jump target is called.
Prerequisites
You are authorized to use transaction RSFC .
Features
SAP DemoContent for Features is delivered with the technical content. You use transaction RSFC to call SAP DemoContent for Features.
Each example scenario in SAP DemoContent for Features contains predefined InfoProviders and queries. You have to activate the scenario in question to be
able to execute queries on the InfoProviders.
The activation causes the system to carry out the following steps:
The delivery version (D version) of the object is transferred into the active version (A version)
Master data delivered with the scenario (attributes, texts, and hierarchies) and transaction data are loaded to the InfoProviders
For each example scenario, you can use the Info function to call documentation containing further information on the business background for the scenario,
the OLAP function, and the objects to use.
There are example scenarios for the following OLAP functions:
Constant Selection
Slow-Moving Item Report
Temporary Hierarchy Join
Exception Aggregation
Conditions
Elimination of Internal Business Volume
Local Aggregation
Variables
Virtual Time Hierarchy
The namespace of objects always begins with the character string 0D_FC.
Features
In the following sections, special BI functions for performance optimization are described in detail:
Purpose
SAP NetWeaver BI Accelerator allows you to improve the performance of BI queries reading data from InfoCubes. The system makes the data of a BI
InfoCube available as a BI accelerator index in a compressed but not aggregated form.
BI accelerator is particularly useful in cases where relational aggregates (see Performance Optimization with Aggregates) or other BI-specific methods of
improving performance (such as database indexes) are not sufficient, are too complex, or have other disadvantages.
For example, if you have to maintain a large number of aggregates for one particular InfoCube, you can use the BI accelerator to avoid this
high maintenance effort. Unlike performance optimization with aggregates, there is only one BI accelerator index for each InfoCube. As with
performance optimization with aggregates, you do not have to make any decisions regarding modeling for the BI accelerator index.
Implementation Considerations
The BI accelerator is based on TREX technology. To use the BI accelerator, you need an installation based on 64-bit architecture. Hardware partners deliver
this variant in preconfigured form as the BI accelerator box . Note that you cannot use a TREX installation configured for searching in metadata and
documents with the BI accelerator since TREX installations are based on 32-bit architecture. Equally, you cannot use a BI accelerator box for searching in
metadata and documents. If you want to use the search function as well as the BI accelerator, you require two separate installations.
OLAP cache
BI Accelerator index
Relational aggregates from the database
InfoCubes from the database
If an active BI accelerator index exists, the OLAP processor always accesses this BI accelerator
index and not the relational aggregates. Therefore, with regard to modeling, we recommend that
you create either relational aggregates or a BI accelerator index for an InfoCube.
Query Execution
When the query is executed, it is not apparent to the user whether data is being read from an aggregate, a BI accelerator index, or an InfoCube.
In the maintenance transaction, you can deactivate a BI accelerator index on a temporary basis to test it for performance purposes or to analyze data
consistency.
You can also execute the relevant query in the query monitor (transaction RSRT) using a corresponding debug option: In the Debug Options dialog box, choose Do Not Use BI Accelerator Index to execute the query with aggregates or an InfoCube.
Use
BI accelerator enables quick access to any data in the InfoCube with low administration effort and is especially useful for sophisticated scenarios with
unpredictable query types, high volumes of data and a high frequency of queries.
Structure
BI Accelerator index
A BI accelerator index contains all the data of a BI InfoCube in a compressed but not aggregated form. The BI accelerator index stores the data at the same
level of granularity as the InfoCube.
It consists of several (possibly split) indexes that correspond to the tables of the enhanced star schema, and a logical index that, depending on the definition of the star schema, contains the metadata of the BI accelerator index.
BI Accelerator Server
The BI accelerator server is a TREX system as an installation of a BI accelerator engine. The data of the BI InfoCube is kept and processed entirely in the
main memory of the BI accelerator server.
The BI accelerator engine is the part of the analytics engine that manages the BI accelerator index. This software allows the system to read
data from the BI accelerator index, add data to the BI accelerator index, or change data. The BI accelerator optimizer is the part of the BI
accelerator engine that ensures the best possible read access to a BI accelerator index. More information: Technical Information About the
SAP NetWeaver BI Accelerator Engine.
Integration
Maintenance Processes for BI Accelerator Indexes
With the BIA index maintenance wizard you can create, activate, fill and delete BI accelerator indexes.
Like relational aggregates, a BI accelerator index is a redundant downstream data source that is used to improve query performance. For this reason, hierarchy
and change run processes and processes for rolling up data are derived from aggregate maintenance. More information: Rolling Up Data in SAP NetWeaver BI
Accelerator Indexes and System Response Upon Changes to Data: SAP NetWeaver BI Accelerator Index.
BI Accelerator Index as InfoProvider for Reporting
At query runtime, analytical engine functions such as aggregation, filtering, selection and some cell-based sorts are performed on the BI accelerator server.
For example, if a column has a thousand rows and some of the cells contain long texts, efficiency is significantly increased by using a ten-bit
binary number to identify the texts during processing and a dictionary to call them again afterwards. The datasets that have to be transferred
and temporarily stored during the different processing steps are reduced on average by a factor of ten.
This means that you can perform the entire query processing in the main memory and reduce network traffic between separate landscapes.
Divided (Split) Indexes
The BI accelerator engine can process huge datasets, without exceeding the limits of the installed memory architecture. You can split large tables (fact tables
and large X and Y tables) horizontally, save them on different servers and process them quickly in parallel. The maximum table size before the system splits
the index depends on the existing hardware of the BI accelerator server. Data is distributed to the subindexes in a round-robin procedure. Write, optimize and
read accesses are parallelized on the BI accelerator server.
This scalability allows users to make use of sophisticated adaptive computing infrastructures such as blade servers and grid computing.
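The round-robin distribution described above can be sketched as follows (illustrative only; the actual number of parts depends on the BI accelerator server hardware):

```python
# Illustrative sketch of round-robin distribution of rows to split
# subindexes on several servers.

def round_robin_split(rows, num_parts):
    """Assign row i to subindex i modulo num_parts."""
    parts = [[] for _ in range(num_parts)]
    for i, row in enumerate(rows):
        parts[i % num_parts].append(row)
    return parts

parts = round_robin_split(list(range(10)), 3)
print(parts)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
```

Because every subindex receives roughly the same number of rows, write, optimize, and read accesses can be parallelized evenly.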
Index Types
The following index types are available:
Normal: In standard cases, the system creates BI accelerator indexes on the BI accelerator server for all the tables in the InfoCube star schema.
Flat: An exception arises if the InfoCube star schema has been deconstructed because, for example, one or more dimension tables have grown very large (> 20% of the InfoCube). In this case, the system does not create dimension tables but de-normalizes the appropriate part of the InfoCube star schema (fact and dimension tables).
Meaning
Exit Maintenance
Continue
In the center area of the screen, you find the following tab pages:
On the
Information tab page, you find additional information about the step.
On the
Messages tab page, the system displays information about the current status.
On the
Index Information tab page, the system displays the tables or indexes of the BI accelerator index and their properties (see SAP NetWeaver BI Accelerator Index Design).
In this area of the screen, you find the following function keys:
Function Key
Application Logs
Meaning
The Log Selection dialog box appears. You choose the processes for which you
want to display the log. You can choose from the following processes:
Initial Filling
Roll Up
Compress InfoCube
Delete Request
Change Run
Check
Choose
BIA Monitor
The BI Accelerator Monitor screen appears (see Using the SAP NetWeaver BI
Accelerator Monitor).
If a BI accelerator index is available, the Maintain BIA Index Properties dialog box opens.
You can specify the following settings:
Always store BIA index data completely in the main memory. This setting
is advisable if enough main memory is available, you constantly require
optimum response times, and the index is used frequently (see also
Checking SAP NetWeaver BI Accelerator Indexes (Transaction RSRV),
test Load BIA Index Data into Main Memory ).
Change the status of the BIA index: Active or Inactive (see scenario 2).
Further information about the index is also provided: Last Changed By , Date and
Time of last change, Index Type (see Technical Information About SAP
NetWeaver BI Accelerator Engine).
Prerequisites
Communication between the BI system and the BI accelerator server takes place using RFC modules. To connect a BI accelerator server to the BI system,
you must make the following settings:
Set up the RFC destination for the BI accelerator server (transaction SM59). For more information, see Customizing under SAP Customizing Implementation Guide → SAP NetWeaver → Business Intelligence → Connectivity of TREX → Creation of RFC Destination in BI System.
Specify the RFC destination for the BI accelerator server (transaction RSADMIN). The RFC BI Accelerator parameter has to correspond to the above
RFC destination.
Process Flow
Access from Data Warehousing Workbench
1. You are in the Data Warehousing Workbench in the Modeling functional area. In the navigation window, choose InfoProvider . In the InfoProvider tree,
navigate to the InfoCube with the queries you want to optimize using the BI accelerator index.
2. In the context menu of the InfoCube , choose Maintain BI Accelerator Index . The first dialog box for the BIA index maintenance wizard appears.
Access from Transaction RSDDV
1. On the Aggregate/BI Accelerator Index: Select InfoCube screen (transaction RSDDV), select the required InfoCube.
2. Choose
BIA Index . The first dialog box for the BIA index maintenance wizard appears.
Scenario 1
You call the BIA index wizard for an InfoCube that does not yet have a BI accelerator index.
Step 1: Creating a BI accelerator index
When you execute this step, the system creates the indexes for the tables of the InfoCube star schema on the BI accelerator server, as long as they have not
already been created by other BI accelerator indexes. These tables consist of the fact and dimension tables of the InfoCube as well as the master data tables
that contain the required SIDs, the S, X, and Y tables of the InfoObjects. A "logical index" is also created. This contains the metadata of the BI accelerator.
Finally, the system activates the BI accelerator index.
If the aggregate was filled successfully, the status in the Object Version column on the
This step may take a few minutes if the individual tables are very large and have split indexes on the BI accelerator server. The more parts
into which the index is being split, the longer the duration of the activation step. For more information about split indexes, see Technical
Information About the SAP NetWeaver BI Accelerator Engine.
To use the BI accelerator index in reporting, you have to fill it with data. To schedule a background job to fill the BI accelerator index, choose Continue .
Step 2: Filling a BI accelerator index
The dialog box for specifying the Start Time appears. Specify when you want the fill job (RSDDTREX_AGGREGATES_FILL) to run in background processing
and choose
When you execute this step, the system starts a process in the background that reads the data in the tables of the InfoCube star schema from the database
and writes them to the corresponding indexes on the BI accelerator server. If the index of a master data table (S/X/Y tables) has already been created and
filled by another BI accelerator index, only those records that have been subsequently added have to be indexed (read mode/fill mode "D" during indexing).
If the aggregate was filled successfully, the status in the Object Status column on the
Reading the data from the database and writing the data to the BI accelerator server can be performed in parallel in the BI system in different
ways. To do this, maintain the system parameters in the BI accelerator monitor.
For more information about the steps for creating and filling a BI accelerator index, see Activating and Filling SAP NetWeaver BI Accelerator Indexes.
Step 3: Completing BI accelerator index maintenance
After the BI accelerator index has been filled, you can choose Cancel to return to the source transaction or complete the BI accelerator index maintenance. The BI accelerator index is available and can be used for queries.
Scenario 2
You call the BIA index wizard for an InfoCube that has a BI accelerator index that is already filled with data.
Step 1: Deleting a BI accelerator index
Since an active and filled BI accelerator index that can be used for reporting is already available, you can either temporarily deactivate it or delete it at this
point. This can be useful if you want to ensure, for performance purposes or analysis of data consistency, that the system is not using a BI accelerator index.
To delete the BI accelerator index, choose
Continue .
The system deletes the definition and the settings of the BI accelerator index in the BI system and the logical index (metadata) and all indexes
for the tables of the enhanced star schema of the InfoCube on the BI accelerator server. The only exceptions are the indexes for the master
data tables that are still being used by other BI accelerator indexes.
To deactivate the BI accelerator index temporarily, choose BIA Index Properties. The BI Accelerator Index Properties dialog box appears. Choose Inactive as the status of the BI accelerator index and confirm.
A BI accelerator index that is switched off is not used when a query is executed. Since BI accelerator indexes that are switched off must also be
consistent, you do not have to activate the BI accelerator index again or fill it when you switch it back on.
Scenario 3
You call the BIA index wizard for an InfoCube that already has an active BI accelerator index, but has not yet been filled or completely filled with data. The full
process is either terminated or not even started.
Step 1: Deleting or continuing to fill a BI accelerator index
Since an active BI accelerator index that can be used for reporting is already available, you can either continue to fill it with data or delete it at this point. You
can see the status of the individual indexes in the Messages from Previous Step area of the screen.
To fill the BI accelerator index, choose Continue Filling .
To delete the BI accelerator index, choose Delete .
For more information about the global indexing parameters, see Global Parameters for Indexing.
Additional technical information about these processes is provided in the following documentation.
Process Flow
Indexing Process by Table/Index on the BI Accelerator Server
The system performs the following steps in order to create an index on the BI accelerator server and make the data visible.
The name of the index is generated from the System ID and Table Name : <<system ID>>_<<table name>>. The system deletes the first
forward slash from the table name and replaces the second with a colon.
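The naming rule in the note can be expressed as a small function (a sketch; the system ID BWP is a made-up example):

```python
# Sketch of the index naming rule: remove the first "/" from the table
# name, replace the (originally second) "/" with ":", and prefix the
# system ID with an underscore. "BWP" is a hypothetical system ID.

def bia_index_name(system_id: str, table_name: str) -> str:
    name = table_name.replace("/", "", 1)   # delete the first slash
    name = name.replace("/", ":", 1)        # second slash becomes ":"
    return f"{system_id}_{name}"

print(bia_index_name("BWP", "/BI0/SVC_PAYM2"))  # BWP_BI0:SVC_PAYM2
```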
Create: For a table, the system creates the index on the BI accelerator server in accordance with the table properties. The system also determines how
many parts the index is to be split into, depending on the present size of the table.
Index: The data is transferred and written to a temporary file on the BI accelerator server.
Prepare optimize: The data in the temporary file is formatted (compressed, coded and so on) as required for search and aggregation. Depending on how
the index is distributed, this step can take longer than the indexing step.
Commit optimize: The previously optimized data is made visible. If you perform rollback for an index, the system rolls back the data to the last commit
optimize.
The logs for the initial fill/indexing of a BI accelerator index are in the application log under object RSDDTREX, subobject TAGGRFILL.
Competing Processes During Indexing
You can activate and fill BI accelerator indexes for different InfoCubes simultaneously.
However, overlaps may occur if several indexing jobs try to index the same master data tables simultaneously. In this case, the first job locks the table and
performs indexing. The other jobs see the lock and schedule the indexing run to take place later. If no new data is loaded in the meantime, the system simply
checks that indexing was performed successfully by the competing job. This step is necessary to avoid the system setting a BI accelerator index to active
when the index is not actually available on the BI accelerator server because the job was terminated.
The subsequent jobs try a total of five times to start the indexing process or determine the status of the index. If this is not possible due to a long-running
process or termination, the system terminates the entire indexing process for the BI accelerator index and notes the InfoCube affected by the lock process.
You have to wait until the current program has finished or the error has been fixed before restarting the indexing process.
Example: Log for initial indexing with competing processes
Load to index for table '/BI0/SVC_PAYM2' locked by competing job
InfoCube of competing process: 'ZBWVC_003'
Lock for table '/BI0/SVC_PAYM2'. Job will be restarted later
...
No new data for index of table '/BI0/SVC_PAYM2'
BI accelerator index for InfoCube '0BWVC_003' filled successfully
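The retry behavior shown in the log can be sketched as follows (assumed logic for illustration, not the actual SAP job code): a job tries up to five times either to acquire the table lock and index, or to confirm that the competing job finished successfully.

```python
MAX_ATTEMPTS = 5

def index_table(table, locks, already_indexed):
    """Try to index a master data table that competing jobs may lock.

    Returns True if the table ends up indexed (by this job or a
    competing one), False if the job gives up after five attempts.
    """
    for _attempt in range(MAX_ATTEMPTS):
        if table not in locks:
            locks.add(table)              # first job locks the table
            already_indexed.add(table)    # ... and performs indexing
            locks.discard(table)
            return True
        if table in already_indexed:      # competing job finished: only
            return True                   # verify its result, no re-index
        # lock still held: the job would be restarted later
    return False  # long-running lock: whole indexing process terminates

locks, done = set(), set()
print(index_table("/BI0/SVC_PAYM2", locks, done))  # True
```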
Integration
If you replace the relational aggregates in an InfoCube with a BI accelerator index, you do not have to make further changes in the process chains or other
settings. The process and the associated programs are identical.
The compression of data packages after rollup, as performed with aggregates to improve efficiency (see Efficiently Loading Data to Aggregates in section Setting Automatic Compression), does not apply to BI accelerator indexes because the data on the BI accelerator server already exists in a read-optimized format. However, it is useful to rebuild the BI accelerator index if the InfoCube is compressed heavily after rollup (see System Response Upon Changes to Data: SAP NetWeaver BI Accelerator Index in section Compression).
You can use delta indexes to speed up the rollup process. For information about optimizing the performance of BI accelerator indexes that are used
particularly frequently, see Improving Efficiency Using BI accelerator Delta Indexes.
Prerequisites
New data packages (requests) have been loaded into an InfoCube.
BI accelerator indexes for this InfoCube have been activated and filled with data.
Features
When you rollup data for an InfoCube, the system first loads the new data into any aggregates that exist in the InfoCube, and then determines the delta of the
missing records for all the tables that have an index in the BI accelerator index of the InfoCube and indexes it. If new SIDs are generated when transaction
data is loaded, the system also writes new records to the indexes of the S, X and Y tables. When the system has indexed all the indexes successfully, the
data of the most recent request is released for reporting.
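The delta determination can be sketched as a simple set difference (illustrative; record ids stand in for the actual request and SID handling):

```python
# Sketch of the delta determination during rollup (assumed logic):
# only records present in the InfoCube tables but not yet in the BI
# accelerator index are indexed.

def rollup_delta(cube_records, indexed_ids):
    """Return the records whose ids are missing from the index."""
    return [r for r in cube_records if r["id"] not in indexed_ids]

cube = [{"id": 1}, {"id": 2}, {"id": 3}]
already_indexed = {1, 2}
print(rollup_delta(cube, already_indexed))  # [{'id': 3}]
```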
Activities
As with relational aggregates, you only have to execute the data rollup after loading transaction data.
For more information about the different execution modes for this activity, in particular the recommended execution types Including Rollup of
Data Packages As a Process in a Process Chain and Starting Rollup of Data Packages Manually , see Rolling Up Data in Aggregates
In InfoCube administration, where you can see whether a rollup is missing, running or successful, the system does not differentiate between whether the
InfoCube has aggregates or a BI accelerator index.
Features
Hierarchy/Attribute Change Run
Since the data in master data tables (X and Y tables) is stored in indexes on the BI accelerator server, BI accelerator indexes, like aggregates, are affected
by changes to master data. However, in contrast to aggregates, the fact tables do not contain the current data for the master data. Therefore, you do not have
to run the potentially time-consuming delta calculations that you have to run for aggregates. Instead, you only transfer the changed records from the master
data tables and change them in the indexes on the BI accelerator server. In most cases, this is considerably quicker than modifying aggregates.
Since the hierarchy tables are not in the BI accelerator index either, there is no pre-aggregation on specific hierarchy levels, as is the case with aggregates.
Again, calculation and modification is unnecessary. However, as with the BI hierarchy buffer, some views of hierarchies that occur in queries are stored on the
BI accelerator server as temporary indexes so that they can be reused. If the hierarchy is changed, these temporary indexes have to be deleted.
The system changes both the master data and the temporary hierarchy indexes during the hierarchy/attribute change run. In this process, the aggregates and
BI accelerator indexes for the relevant objects are determined for the previously changed InfoObjects that are selected. As before, the system first modifies
the aggregates in accordance with the changes (see System Response Upon Changes to Master Data and Hierarchies) and then runs the two quick processes
described for the relevant BI accelerator indexes:
The X and Y indexes are filled with the changed records.
The hierarchy buffer is deleted from the BI accelerator index.
Finally, the system activates the master data and displays the changed aggregates and BI accelerator indexes with the new data for reporting.
Compression
With BI accelerator indexes, you do not have to compress after rolling up data packages. The data on the BI accelerator server already exists in a read-optimized format.
However, in the following cases it may be useful to rebuild the BI accelerator index, although this is not strictly necessary.
A BI accelerator index is created for an InfoCube that is not aggregated, or a large number of data packages are later loaded to this InfoCube. If you compress
this InfoCube, more data is contained in the BI accelerator index than in the InfoCube itself and the data in the BI accelerator index is more granular. If
compression results in a large aggregation factor (>1.5), it may be useful to rebuild the BI accelerator index. This ensures that the dataset is reduced in the BI
accelerator index too.
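The rebuild rule described above boils down to comparing record counts before and after compression. A minimal sketch in Python (the helper name is hypothetical; the 1.5 threshold is the aggregation factor mentioned in the text):

```python
def should_rebuild_bia_index(uncompressed_rows: int, compressed_rows: int,
                             threshold: float = 1.5) -> bool:
    """Return True if compression reduced the fact data enough
    (aggregation factor > 1.5) that rebuilding the BIA index pays off."""
    if compressed_rows == 0:
        return False
    aggregation_factor = uncompressed_rows / compressed_rows
    return aggregation_factor > threshold

# Example: 12 million uncompressed records collapse to 6 million after
# compression, giving an aggregation factor of 2.0 -> rebuild is worthwhile.
print(should_rebuild_bia_index(12_000_000, 6_000_000))
```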
Non-cumulative InfoCubes, that is, InfoCubes with at least one non-cumulative key figure, should still be rebuilt at long intervals after compression. We recommend this especially if calculating the markers at query runtime takes a long time.
Deleting Data
If you delete data from the InfoCube selectively, the BI accelerator index has to be rebuilt. When you execute selective deletion, the system automatically
deletes the affected BI accelerator index.
When you delete a data package (that is not aggregated) from an InfoCube, the index for the package dimension table is deleted and rebuilt. The facts in the fact index remain but are hidden because they are no longer referenced by an entry in the package dimension table. Therefore, more entries exist in the index than in the table of the InfoCube. If you regularly delete data packages, the number of unused records increases, increasing memory consumption. This can have a negative effect on performance. In this case, you should consider rebuilding the BI accelerator index regularly.
Because read performance deteriorates as the delta index grows, we recommend that you only switch on the delta index for essential indexes
such as fact indexes and X/Y indexes. This improves performance when you modify data after a hierarchy or attribute change run.
Integration
We recommend that you regularly merge the delta indexes with your main index so that read performance is not negatively affected. You can do this in several
ways:
On the Analysis and Repair of BI Objects screen (transaction RSRV), in the BI Accelerator > BI Accelerator Performance area, you can select the Size of Delta Index elementary test. You can choose Correct Error to access repair mode and then execute a MERGE for the indexes. For more information about analyzing BI accelerator indexes in the analysis and repair environment, see Checking SAP NetWeaver BI Accelerator Indexes (Transaction RSRV).
You can schedule program RSDDTREX_DELTAINDEX_MERGE.
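The merge that RSRV or RSDDTREX_DELTAINDEX_MERGE performs can be pictured as folding the small, write-optimized delta index into the large main index so that subsequent reads touch only one structure. A minimal sketch using plain dictionaries in place of TREX indexes (all class and attribute names are illustrative, not the actual TREX API):

```python
class SketchIndex:
    """Toy model of a TREX-style index with a main part and a delta part."""
    def __init__(self):
        self.main = {}   # large, read-optimized bulk data
        self.delta = {}  # small, write-optimized overlay

    def write(self, key, value):
        self.delta[key] = value          # new data goes to the delta index only

    def read(self, key):
        # at query time the delta index takes precedence over the main index
        return self.delta.get(key, self.main.get(key))

    def merge(self):
        self.main.update(self.delta)     # fold the delta into the main index ...
        self.delta.clear()               # ... and empty the delta index

idx = SketchIndex()
idx.main = {"r1": 100}
idx.write("r2", 200)
idx.merge()
print(idx.main, idx.delta)
```

After the merge, reads no longer have to consult two structures, which is why regular merges keep read performance from degrading.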
Activities
To set the delta index for a BI accelerator index, on the BI Accelerator Monitor screen (see Using the SAP NetWeaver BI Accelerator Monitor), choose BI Accelerator > Index Information > Set Delta Index. The Delta Index Properties dialog box appears.
Switching On the Delta Index
In the Delta Index column, set the corresponding indicator if you want the table to use a delta index.
The new setting takes effect with the next delta indexing operation.
Switching Off the Delta Index
You reset the setting for the delta index in the same way.
Before the next indexing operation, the system merges the delta index and the main index. If the delta index is already very large, the next process may take
longer.
If the BI accelerator server is not available, the system directs query requests for an InfoCube with a BI accelerator index to the database. The system tries to use aggregates for this InfoCube, if they are available.
After 30 minutes, if the problem is not resolved or if the time stamp entry is not deleted or changed, the system directs query requests to the BI
accelerator server again. If the problem still exists, the system writes a new time stamp and redirects queries to the database again for the next 30
minutes.
As soon as the BI accelerator is available, the system automatically sends all queries to the BI accelerator index of the affected InfoCube.
In the BI accelerator monitor, it is not possible to maintain the indexes on the logical level of the BI accelerator InfoProvider. Use BI
accelerator index maintenance instead (see Using the BIA Index Maintenance Wizard).
Queries work with the accelerator component of the BI accelerator, and the BI accelerator monitor works with the alert server. The accelerator
components and the alert server are independent services within the BI accelerator. For this reason, queries can run without errors while at the
same time, the BI accelerator monitor displays an error in the alert server.
Integration
Access from Data Warehousing Workbench
You are in the Administration functional area of the Data Warehousing Workbench. In the navigation pane, choose Monitors > BI Accelerator Monitor. The BI accelerator monitor is displayed.
You can find more information about the connection of the BI accelerator monitor to the CCMS monitoring framework in SAP Note 970771.
More information about the CCMS monitoring framework:
Alert Monitor
Prerequisites
The BI accelerator is based on TREX technology. For more information, see Performance Optimization with SAP NetWeaver BI Accelerator.
Features
BIA Results of Check Screen Area
Test Results area
Here the system displays the results of the BI accelerator consistency checks.
Tab Page
Summary
Description
The start view shows a summary of the current results of the check.
The system tries to summarize the current results of the check in such a way that the
status of the BI accelerator can be repaired in as few actions as possible.
The status display shows whether the status is OK, whether there are messages or warnings, or whether there are errors.
For check results with a warning or error status, the system proposes an action to solve the problem. If the proposed action can be started from the BI system, you initiate this action by choosing the icon in the Execute column.
For each action, the user can display an explanatory long text by choosing Display Long Text. If details are available for a check, you can call them by choosing Details Available in the Details column. The system displays the details in the Check Details screen area.
Current Results
This view displays the current results of the consistency checks that were
performed.
The status of each check is indicated by a colored status display.
The system displays general information about each check: the length of the check
(in seconds), the date and time at which the check was started, and the check set
within which the check was started.
For each check, the user can display an explanatory long text in the Long Text column by choosing Display Long Text.
If a check determines that an action is required to solve the problem, the system displays the action in the table. You start the action by choosing the icon in the Execute column.
If details are available for a check, you can call them by choosing Details Available in the Details column. The system displays the details in the Check Details screen area.
History
This view returns the results of previous runs for the BI accelerator consistency
checks so that you can track developments or changes in the results.
The system displays general information about each check: the length of the check
(in seconds), the date and time at which the check was started, and the check set
within which the check was started.
For each check, the user can display an explanatory long text in the Long Text column by choosing Display Long Text.
Since this information is not current, the system does not propose actions.
You can have the state of the BI accelerator sent to you by e-mail at a scheduled time. For more information, see the section on the BIA Checks menu.
Check Details area
If more details are available for the results of the check, the user can display them here.
If problems are listed with BI accelerator servers, you can display the affected servers in the details area.
BIA Actions Screen Area
Execute Actions area
In the Execute Actions screen area, the user can directly execute the most important actions to resolve BI accelerator problems. The BI accelerator
differentiates between actions that the user can start from the BI system and those that the user must carry out in the BI accelerator.
Example of an action that the user must carry out in the BI accelerator: Actions for the trace files of the BI accelerator.
On the Current Results tab page in the Check Results screen area, the BI accelerator proposes actions for check results that have a warning or error status. If these are actions that can be executed from the BI system, you can execute them directly by choosing the icon in the Execute column.
In the Execute Actions screen area, the BI accelerator monitor collects all the proposed actions. It sets the indicator if the action can be started from the BI
system. A Proposal field is displayed alongside the proposed actions.
Here the system supports the direct execution of the following actions:
Action: Description
Restart Host: This action restarts all the BI accelerator servers and services. This includes the name server and index server.
Restart BIA Index Server: This action only restarts the index server. (The name servers are not restarted.)
Rebuild BIA Indexes: If a check discovers inconsistencies in the indexes, you can use this action to delete and rebuild all the BI accelerator indexes.
The actions Restart Host, Restart BIA Server, and Restart BIA Index Server are hierarchically related: If the host is restarted, the server is automatically
restarted so that this action no longer has to be started explicitly. For example, the Restart BIA Server action includes a restart of the BIA index server.
Therefore, as soon as a higher-level option is selected, the system automatically sets the indicator for the lower-level selection boxes and deactivates them
for the selection.
BIA Action Messages area
The log display in the BIA Action Messages screen area shows information about the processes in the BI accelerator monitor.
If the system reads status information (gets check results), it writes this to the log, for example: Status Information Read from BIA.
Each message has a status. Where appropriate, you can also display the explanatory long text by choosing Display Long Text.
Toolbar Functions
The following functions are available in the toolbar:
Function: Description
Refresh BIA Log and Actions: Refreshes the monitor. The system displays the current results of the BI accelerator checks again and makes new proposals for actions.
BIA Availability: Uses an RFC availability test to check the availability of the connection to the BI accelerator. If no connection to the BI accelerator is available, the necessary measures are initiated.
BIA Load Monitor: Calls the BI accelerator load monitor in a separate window that refreshes itself independently.
In this window, you can see the following BI accelerator key figures:
Host:Port : Host and port of the BI Accelerator
Memory Process : Memory usage of TREX server process
Total Memory : Memory usage of all processes
Memory Available : Available memory
CPU of All Processes : CPU usage of all processes
CPU Process : CPU usage of TREX server processes
Response Time : Average response time of the last queries
Queries : Queries per second
Requests : Number of external requests
Requests Including Internal : Number of external and internal requests
Requests Active : Number of active requests
Hanging Requests : Number of hanging requests
You can only start one load monitor.
Since the load monitor is started in a new window, it uses a new mode. Make sure
that a mode is available before you activate the load monitor.
For technical reasons, the load monitor window is kept open. If it is hidden by other
windows, you can access it using the key combination ALT+TAB . You can continue
to work in this window or close it.
You can only end the load monitor from the BI accelerator monitor: As soon as you
start the load monitor, the pushbutton function changes to Switch Off BIA Load
Monitor. After you have stopped the load monitor, the system resets the mode to
the start view.
Menu Option: Meaning
Analysis of BI Objects Consistency Checks: On this screen, you can check the data on the BI accelerator server, schedule these checks, and view the logs of checks that have already run. You can group certain checks to form check sets.
Menu BI Accelerator
You can choose the following options from the BI Accelerator menu:
Menu Option: Meaning
Execute Action: You start required actions directly from the BI system (see BIA Actions screen area above).
BIA Load Monitor: You can call the load monitor of the BI accelerator in a separate window (see Toolbar Functions above).
Index Checks: With the Execute/Display Index Checks menu option, the system executes the following checks once a day (always at 0:00:01) and then displays the results.
You can call the following functions from this menu option:
Activities
You get detailed information about the status of the SAP NetWeaver BI Accelerator and the check results.
The system proposes actions to correct errors in the BI accelerator, if applicable.
If these actions can be started in the BI system, you can trigger them there immediately.
Integration
To navigate to this dialog box, on the SAP NetWeaver BI Accelerator Monitor screen (transaction RSDDBIAMON), choose BI Accelerator > Index Settings > Change Global Parameters.
Features
Global Indexing Parameters
Name: Value (Changeable)
BATCHPARA: 03
10.000.000
SUBPKGSIZE: 20.000
Activities
You use this dialog box to edit the values of the global indexing parameters.
Integration
BIA index-specific information can be displayed in BIA index maintenance and in the BI accelerator monitor.
BIA Index Maintenance
As soon as a BI accelerator index has been created, the system displays information about its tables and indexes in BIA index maintenance (see Using the BIA Index Maintenance Wizard).
BI Accelerator Monitor
If you choose BI Accelerator > Index Information > Display All BIA Indexes, the Information about BIA Indexes dialog box appears. The system
displays all the BI accelerator indexes that exist in the system.
Features
In the BI accelerator monitor, the system shows more information than in BIA index maintenance. The following table provides an overview of this information.
* indicates that the column is displayed in BIA index maintenance as well as in the BI accelerator monitor.
Description of a BI accelerator index
Column
Description
InfoCubes
Technical name of the InfoCubes for which BI accelerator indexes have been
created
Object Version
Status display:
BI accelerator index is active.
BI accelerator index is not active.
See Activating and Filling SAP NetWeaver BI Accelerator Indexes
Object Status
Status display:
BI accelerator index is filled.
BI accelerator index is not filled.
See Activating and Filling SAP NetWeaver BI Accelerator Indexes
Table Name *
Table Size *
Specifies the approximate current size of the individual tables (number of data
records), as calculated from the database statistics.
Index Status *
Status of index
Indicates that a delta index is being used for the BI accelerator index (see Improving
Efficiency Using SAP NetWeaver BI Accelerator Delta Indexes).
Multiple Usage *
With S, X and Y tables, this indicates that one of the tables is already being used by
another BI accelerator index and therefore already exists as an index.
Last Changed By *
For more information about building and using the analysis and repair environment, see Analysis and Repair Environment.
Integration
In the SAP NetWeaver BI Accelerator Monitor, you can specify that the system is to run a small number of tests on a daily basis. You do this by choosing BI Accelerator > Execute/Display Index Checks. For more information, see Using the SAP NetWeaver BI Accelerator Monitor.
Prerequisites
The SAP NetWeaver BI accelerator index you want to check has been activated and filled with data.
Some tests work with statistics data (see tests: Propose Delta Index for Indexes, Compare Size of Fact Tables with Fact Index ).
As a prerequisite, the statistics have to be switched on for the relevant InfoProvider. You make this setting on the statistics properties maintenance screen (on the Data Warehousing Workbench screen, choose Tools > Settings for BI Statistics). For more information, see Statistics for Maintenance
Processes of SAP NetWeaver BI Accelerator Indexes.
Features
The following tests are available under All Elementary Tests > BI Accelerator:
BI Accelerator Consistency Checks
Master Data and Transaction Data
Compare Data in BI Tables and BIA Indexes ( Check Table Index Content )
The system compares the content of each individual table with the content of the corresponding index on a record-by-record basis. This check is only
suitable for tables or indexes that do not contain a large amount of data, such as dimension tables, certain SID tables (S) and attribute tables (X and Y).
This is not generally the case with fact tables. If a table contains 10,000 records or more, it is not checked.
In some situations, the content of the indexes of the BIA index may differ from the content of the corresponding database table. This may be the case if
requests have been deleted from the InfoCube or if an InfoCube has been compressed.
Check Sums of Key Figures of BIA Queries ( Check Key Figure Sums Internally )
First the system executes a query on the BI accelerator index, which is aggregated using all key figures. Next, all the characteristics and navigation
attributes that exist in the InfoCube are included in the drilldown individually and the totals are calculated. The system compares the result with the result
of the first query. This test checks the completeness of the join path from the SID table, through the dimension table, to the fact tables.
Runtime: Depends on the number of characteristics and navigation attributes and on the number of records in the fact table.
If the test shows that the data is incorrect, you have to rebuild the BIA index and the indexes for the master data tables.
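The logic of this check can be sketched with ordinary rows: the grand total of a key figure must equal the sum of the per-value totals for every characteristic drilldown, or the join path is broken somewhere. A minimal sketch (the row layout and names are illustrative):

```python
from collections import defaultdict

rows = [  # illustrative fact rows: characteristic values plus one key figure
    {"country": "DE", "product": "A", "sales": 10},
    {"country": "DE", "product": "B", "sales": 5},
    {"country": "US", "product": "A", "sales": 7},
]

grand_total = sum(r["sales"] for r in rows)

def drilldown_total(char: str) -> int:
    """Aggregate the key figure per characteristic value, then sum the groups."""
    groups = defaultdict(int)
    for r in rows:
        groups[r[char]] += r["sales"]
    return sum(groups.values())

# every single-characteristic drilldown must reproduce the grand total
for char in ("country", "product"):
    assert drilldown_total(char) == grand_total
print("join path consistent, total:", grand_total)
```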
Check Sums of Key Figures of BIA Queries with Database ( Check Table Index of Key Figure Totals )
Similar to the Check Key Figure Sums Internally mode, the system executes highly aggregated queries and compares the results of the InfoCube in the
database with those of the BI accelerator index.
For large InfoCubes the runtime may already be considerable, since queries to the database take longer.
Check Existence of Indexes for Database Tables ( Table-Index Relation )
An index is created for almost every table of the BI InfoCube enhanced star schema: fact (F) tables, dimension (D) tables, SID (S) tables and attribute
tables (X and Y); the only exception is SID tables with numeric characteristic values.
This test checks whether the named indexes have been created on the BI accelerator server.
Runtime: Very fast
If the test reveals that an index is missing, rebuild the index for the table.
Check for Consistency Using Random Queries
The system creates random queries without persisting them. These random queries are only used for this test: The system reads the data once from the
database and once from the BI accelerator. It then compares the results. If the results differ, an error message is output.
Note that there can be different results if the data of the InfoCube is changed between execution of the query on the database and in the BI
accelerator (for example by a change run or by rolling up new requests).
You can verify the results by executing the program RSDRT_INFOPROV_RANDOM_QUERIES with the following parameters:
InfoProvider: Name of the InfoCube
Number of queries: 10
Starting value of random generator
Trace comparison: 'X'
You can leave all other values unchanged. The program can also be executed in the background and the results viewed in the spool list.
If you use the same starting value, the same random queries are generated; you can thus repeat the test.
Automatic repair is not available. If necessary, you must rebuild the BI accelerator index.
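The reproducibility property described here (same starting value, same random queries) is the standard behavior of a seeded pseudo-random generator, as this sketch illustrates (the selection format is illustrative, not the actual query model):

```python
import random

def generate_query_selections(seed: int, count: int = 10):
    """Simulate generating `count` random query selections from a fixed seed."""
    rng = random.Random(seed)  # dedicated generator, independent of global state
    characteristics = ["country", "product", "customer", "month"]
    # each selection: a characteristic to drill down by and a random filter value
    return [(rng.choice(characteristics), rng.randint(1, 100)) for _ in range(count)]

# the same starting value always yields the same "random" queries,
# so a failed test can be repeated with identical selections
assert generate_query_selections(42) == generate_query_selections(42)
assert generate_query_selections(42) != generate_query_selections(43)
```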
Verification of the Buffer Entries of the BIA Hierarchy Buffer
When queries in hierarchies are executed, the relevant hierarchy nodes are expanded to the relevant leaves. This leaf-node relation is saved in a
temporary index in the BI accelerator. The hierarchy buffer manages expanded hierarchies according to an LRU (least recently used) algorithm.
The check verifies whether all temporary indexes in the hierarchy buffer contain the correct data.
If the hierarchy buffer contains incorrect entries, write a customer message. This state is incorrect. If you urgently need to resolve the error, you can
delete the entire hierarchy buffer. In this case, however, SAP will not be able to find the error.
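An LRU buffer like the hierarchy buffer described above can be sketched with Python's OrderedDict; the class name, capacity, and keys below are illustrative, not the actual BIA implementation:

```python
from collections import OrderedDict

class HierarchyBuffer:
    """Toy LRU cache: keeps the most recently used expanded hierarchies."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self._entries = OrderedDict()  # hierarchy key -> expanded leaf-node relation

    def get(self, hierarchy_key):
        if hierarchy_key not in self._entries:
            return None
        self._entries.move_to_end(hierarchy_key)   # mark as recently used
        return self._entries[hierarchy_key]

    def put(self, hierarchy_key, expanded_leaves):
        if hierarchy_key in self._entries:
            self._entries.move_to_end(hierarchy_key)
        self._entries[hierarchy_key] = expanded_leaves
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)       # evict least recently used

buf = HierarchyBuffer(capacity=2)
buf.put("H1", ["leaf1", "leaf2"])
buf.put("H2", ["leaf3"])
buf.get("H1")             # H1 becomes the most recently used entry
buf.put("H3", ["leaf4"])  # evicts H2, the least recently used entry
print(list(buf._entries))  # ['H1', 'H3']
```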
Metadata
Check Definition of Logical Index
The system compares the definitions of each of the indexes for a BIA index with the current versions of the database tables. It checks whether the
number, name, and type of the table fields in the database match the definition for the index on the BI accelerator server.
An index may have changed if, for example, the InfoCube was changed. If this is the case, the BI accelerator index has to be repaired (see test BIA
Index Adjustments After InfoCube Activation ).
Note that if you do not specify an InfoCube, the system executes the test for all InfoCubes that have a BI accelerator index.
If an index has been changed, the system deletes the old index, creates a new index with the correct definition, and fills it. All BI accelerator indexes that
use this index are set to "inactive"; they are not available for reporting purposes during this time.
Runtime: Depending on the size of the table, this process may take some time.
Compare Index Definition in BIA with Table on Database
The system checks the logical index of a BI accelerator index. The logical index contains the metadata of the BI accelerator index, such as the join
conditions and the names of the fields.
The logical index may change if, for example, the InfoCube has been changed. If this is the case, the BI accelerator index has to be repaired (see test
BIA Index Adjustments After InfoCube Activation ).
Note that if you do not specify an InfoCube, the system executes the test for all InfoCubes that have a BI accelerator index.
If the logical index has been changed, the system deletes the old index and creates a new index with the correct definition. The system temporarily sets
the BI accelerator index to "inactive"; it is not available for reporting purposes during this time.
Find indexes with status unknown
The system checks whether BI accelerator indexes contain indexes that have the status "unknown" (= U). This only occurs in exceptional cases when the
commit call (commit optimize) terminates during indexing. Since in this case it is not clear whether the data from the preceding indexing call is
available, the affected indexes are rebuilt in repair mode.
BI Accelerator Performance Checks
Size of Delta Index
If you have chosen delta mode for an index of a table, new data is not written to the main index but to the delta index. This can significantly improve
performance during indexing. However, if the delta index is large, this can have a negative impact on performance when you execute queries. When the
delta index reaches 10% of the main index, the system displays a warning.
The system performs a merge for the index in repair mode. The settings are retained.
Propose Delta Index for Indexes
It is useful to create a delta index for large indexes that are often updated with new data. New data is not written to the main index, but to the delta index.
This can significantly improve the performance of indexing, since the system only performs the optimize step on the smaller set of data from the delta
index. The data from the delta index is used at query runtime.
The system determines proposals from the statistics data: Proposals are those indexes that received new data more than 10 times during the last 10
days. A prerequisite for these proposals is that the statistics for the InfoCube are switched on.
Data in the main index and delta index should be merged at regular intervals (see test Size of Delta Index ).
In repair mode, the system sets the Has Delta Index property for the proposed indexes. The delta index is created when the data is next loaded for this
index.
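The proposal rule stated above (more than 10 load events within the last 10 days, taken from statistics data) can be sketched as a simple date filter; the function name and the synthetic load dates are illustrative:

```python
from datetime import date, timedelta

def propose_delta_index(load_dates, window_days: int = 10,
                        min_loads: int = 10, today=None) -> bool:
    """True if the index received new data more than `min_loads` times
    within the last `window_days` days."""
    today = today or date.today()
    cutoff = today - timedelta(days=window_days)
    recent = [d for d in load_dates if d >= cutoff]
    return len(recent) > min_loads

today = date(2016, 6, 15)
loads = [today - timedelta(days=i % 5) for i in range(12)]  # 12 recent loads
print(propose_delta_index(loads, today=today))  # 12 > 10 -> proposed
```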
Compare Size of Fact Tables with Fact Index
The system calculates the number of records in both fact tables (E and F tables) for the InfoCube and compares them with the number of records in the
fact index of the BI accelerator index. If the number of records in the BI accelerator index is significantly greater than the number in the InfoCube (more
than a 10% difference), you can improve query performance by rebuilding the BIA index.
The following circumstances can result in differences in the numbers of records:
The InfoCube was compressed after the BI accelerator index was built. Since the BI accelerator index is not compressed, it may contain more
records than the InfoCube.
Requests were deleted from the InfoCube after the BI accelerator index was built. The requests are deleted from the BIA index in the package dimension only. The records in the fact index are therefore no longer referenced and are no longer taken into account when the query is executed; however, they are not deleted.
Note that the database statistics for calculating the size of the fact table must be up to date, since the test does not recount; it uses the
database statistics from the tables.
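The 10% criterion used by this check can be sketched as follows (the function name and the handling of an empty InfoCube are illustrative assumptions):

```python
def rebuild_recommended(cube_records: int, index_records: int,
                        tolerance: float = 0.10) -> bool:
    """True if the fact index holds significantly more records (> 10%)
    than the InfoCube's E and F fact tables combined."""
    if cube_records == 0:
        return index_records > 0
    return (index_records - cube_records) / cube_records > tolerance

print(rebuild_recommended(1_000_000, 1_200_000))  # 20% more records -> True
print(rebuild_recommended(1_000_000, 1_050_000))  # only 5% more -> False
```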
Load BIA Index Data into Main Memory
You use this test to load all the data for a BI accelerator from the file server into the main memory if the data is not already in the main memory.
This action is useful if you want to ensure that queries executed in the corresponding InfoCube achieve optimal performance the first time they are
executed and do not have to read data anew from the file server.
Data for an index is deleted from the main memory, for example, when new data is added to this index (during rollup or a change run). In BIA index maintenance (choose BIA Index Properties; see Using the BIA Index Maintenance Wizard), you can also adjust the settings for the BI accelerator index such that data is loaded automatically to the main memory every time changes are made.
Note that if you do not specify an InfoCube, the system executes the test for all BI accelerator indexes that are active and filled.
Delete BIA Index Data from Main Memory
You use this test to delete all data for a BI accelerator index from the main memory.
Master data indexes that are still required by other InfoCubes are not deleted from the main memory. The data on the file server is not deleted; the BI accelerator index is still active.
This action is useful if there is little space in the main memory on the BIA server and you have data in the main memory that can be deleted. This is
useful in the following cases:
There is data in the main memory that is no longer used or is rarely used.
There is data in the main memory that places a high load on system performance when the query is executed initially (when the data is read from the file server into the main memory).
If you do not specify an InfoCube, the system runs the test for all BI accelerator indexes that are active and filled.
Estimate Runtime of Fact Table Indexing
The system estimates the time required to fill the fact index. It uses the current parameter values for background and dialog parallel processing. The time
taken is calculated from the processes available and the estimated maximal throughput of data records in the database, the application server, and the
BIA server.
The calculated duration is an estimate; the load on the system, the distribution of data across block criteria and deviations during processing can all affect
the actual time taken.
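The estimate described above combines the available parallel processes with the slowest per-record throughput along the path (database, application server, BIA server). A sketch under illustrative numbers (the formula is a simplification, not SAP's actual estimator):

```python
def estimate_indexing_runtime(records: int, processes: int,
                              throughputs_per_process: dict) -> float:
    """Estimate the fill time in seconds: the pipeline is limited by its
    slowest stage, and work is spread over the available parallel processes."""
    bottleneck = min(throughputs_per_process.values())  # records/sec per process
    return records / (processes * bottleneck)

seconds = estimate_indexing_runtime(
    records=100_000_000,
    processes=4,
    throughputs_per_process={"database": 50_000, "app_server": 80_000, "bia": 120_000},
)
print(round(seconds))  # 100e6 / (4 * 50,000) = 500 seconds
```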
Estimate Memory Consumption of Fact Table Index
The system estimates the size of the fact table index of a BI accelerator index. In doing so, the system analyzes the data in the fact table and provides a
projection.
Note that if data distribution is poor, the actual memory consumption can deviate from the projected value. A more exact analysis would
demand more time than that required to rebuild the index, since the number of different values in the fact table needs to be determined for each
column (count distinct).
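A dictionary-compressed column store sizes each column roughly by the bits needed per value times the row count, plus the dictionary itself, which is why the number of distinct values per column matters. A crude sketch of such a projection (the sizing formula is illustrative, not SAP's actual estimator):

```python
import math

def estimate_column_store_bytes(rows) -> int:
    """Crude size projection for a dictionary-encoded column store:
    per column, ceil(log2(distinct)) bits per row plus a naive dictionary."""
    if not rows:
        return 0
    total = 0
    for col in rows[0]:
        distinct = {r[col] for r in rows}
        bits = max(1, math.ceil(math.log2(len(distinct)))) if len(distinct) > 1 else 1
        total += (bits * len(rows)) // 8 + 1          # value vector (bytes)
        total += sum(len(str(v)) for v in distinct)   # naive dictionary size
    return total

sample = [{"country": "DE", "sales": 10}, {"country": "US", "sales": 10}]
print(estimate_column_store_bytes(sample))
```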
BI Accelerator Repair Programs
Delete and Rebuild All BIA Indexes
All BI accelerator indexes in the system are deleted. If you selected the Execute option, the indexes are then recreated and filled. This is sometimes
required for a successful restart with consistent data if a critical error occurs.
BIA Index Adjustments After InfoCube Activation
If an InfoCube is changed as a result of the addition of a key figure, for example, the system does not automatically adjust the BI accelerator index, since
the relevant process may take a long time and can even require a partial reindexing.
When you execute this test, information about any changes identified is written to the log. The system makes the required changes in repair mode.
We recommend that you run this repair job as a background job, if required.
Rebuild All Master Data Indexes of a BIA Index
All indexes for master data tables in a BI accelerator index are rebuilt. This includes indexes for SID tables and attribute tables (X and Y tables). When an
entire BI accelerator index is rebuilt, these tables are not always rebuilt since they are also used by other BI accelerator indexes. If this results in data
consistency issues, it may be necessary to rebuild the indexes for the master data tables.
In repair mode, the system first deletes the relevant indexes and then recreates them. All BI accelerator indexes that use these indexes are set to
"inactive"; they are not available for reporting purposes during this time.
The following tests are available under All Combined Tests BI Accelerator :
Consistency Checks (Detailed)
Consistency Checks (Fast)
Performance Tests
Execution Modes
Execution Mode
Description
Schedule
The dialog box for specifying start dates appears. Specify the time(s) for the execution. You can view the results of the check in the application log.
Correct Error
In repair mode, the system performs certain repair tasks. (Repair tasks are not
available for all tests).
Evaluating Logs
1. In the Selection of Check Mode for BI Accelerator Index dialog box, choose Display Logs . The Analyze Application Log screen appears for object
RSDDTREX, subobject TAGGRCHECK.
2. Enter the required data to restrict the number of logs.
3. Choose
Activities
You select the test(s) and specify the mode of execution. You can view the results of the check in the application log.
Procedure
Creating a New Check Set
1. Give the check set a description.
2. Specify the InfoCubes of the BIA index for which the check set is to be executed. Input help is available. Multiple selections are possible.
3. Specify the maximum degree of parallelization if necessary. The degree of parallelization is only applicable for background processing. The system
starts one process (dialog) for each InfoCube; a maximum of n processes are executed simultaneously (n = parameter value).
4. If necessary, set the indicator If errors occur deactivate BIA index for queries . If you set this indicator, the BIA index is immediately set to "inactive" (cannot be used for queries) as soon as the check set finds incorrect data in the BIA index. This prevents incorrect data being used for reporting in the BIA. Note, however, that a check can report incorrect data even though the data is correct, for example, because a load process (master data or transaction data) has changed the data at the same time.
5. If you want an e-mail to be sent if an error occurs (if incorrect data is displayed), enter the address of the recipient in the relevant field.
6. If the check set is to be executed immediately after the rollup of new requests to an InfoCube, set the relevant indicator. The check set is then still part of
the process (this is relevant for integration into a process chain), but the lock on the process is no longer valid, so that other processes are not
interrupted. The check set is not executed for all InfoCubes, but only for the InfoCube for which the data was rolled up.
7. If the check set is to be executed immediately after the change run, set the relevant indicator. As before, the check set is still part of the process, but the
lock on the process is no longer valid. The check set is only executed for the InfoCubes whose BIA index was adjusted in the change run.
8. Each tab page contains a test. You can find the description of the test under Details of Check. Select the checks relevant for your check set by setting
the corresponding indicator for Execute Test . Select the check-specific options.
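The degree of parallelization described in step 3 amounts to a bounded worker pool: one check per InfoCube, with at most n checks running at once. The sketch below is illustrative Python, not SAP code; the `check` callable is a hypothetical stand-in for a single consistency test.

```python
from concurrent.futures import ThreadPoolExecutor

def run_check_set(infocubes, check, max_parallel):
    """Run one check per InfoCube with at most max_parallel in flight.

    `check` is a hypothetical callable standing in for one consistency
    test; it is not an SAP API.
    """
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        # map() preserves input order, so results line up with names.
        return dict(zip(infocubes, pool.map(check, infocubes)))

# Three InfoCubes, at most two checks running at the same time.
report = run_check_set(["CUBE_A", "CUBE_B", "CUBE_C"],
                       check=lambda cube: ("OK", cube),
                       max_parallel=2)
```

As in the documented behavior, raising the parameter value only increases concurrency; each InfoCube still gets exactly one check.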
Overview of Consistency Checks

The check set contains the following tests (one per tab page):
Data Compar.
Totals in BIA
Random Queries
Index Exist.
Choose Schedule to open the Start Time dialog box. Here you can schedule the check set to run once or periodically in the background. The check set
must be saved beforehand. The name of the scheduled job is BW_TR_RSDDTREX_INDEX_CHECK.
You can also execute a check set by using program RSDDTREX_INDEX_CHECK. To do this, you need the check set ID, or you can select the check set
from the input help. You can also use this program to add a check set to process chains. To call the logs, choose Logs .
Deleting a Check Set
Select an existing check set, choose Delete , and answer Yes to the confirmation prompt.
Integration
Some BI accelerator tests in the analysis and repair environment work with statistics data (see Checking SAP NetWeaver BI Accelerator Indexes (Transaction
RSRV), tests: Propose Delta Index for Indexes, Compare Size of Fact Tables with Fact Index ).
Prerequisites
The statistics have to be switched on for the relevant InfoProviders. You make this setting in the statistics properties maintenance screen (on the Data Warehousing Workbench screen, choose Tools Settings for BI Statistics ). For more information, see Maintenance of Statistics Properties.
Features
The statistics table contains the following information for each table that is indexed:
RSDDSTATTREX

Column         Description
STATUID
TABLNM         Table name
CHANGEMODE     Specifies whether the process is part of a BI accelerator rebuild ("N"), the rollup ("R"), or a modification after a change run ("C")
FILLMODE
TIMEACTIVATE   Time of activation
TIMEREAD
TIMEFILL
TIMEOPTIMIZE
TIMECOMMIT
REC_INSERTED
TSTPNM         User
TIMESTMP
This may be the case, for example, if you are obtaining different results for a BI query depending on whether you use the SAP NetWeaver BI
Accelerator.
Integration
To record traces for query execution, use the query monitor (see Query Monitor).
To record performance traces, use the BI accelerator monitor (see Using the SAP NetWeaver BI Accelerator Monitor).
Activities
Trace for Queries in the Query Monitor
Activation
In the query monitor you can execute and debug queries.
1. Select the query for which you want to record a trace.
2. Choose Execute and Debug . The Debug Options dialog box appears. The options are ordered in a hierarchy.
3. Choose BIA Server BIA Default Trace .
If you set the indicator for the BIA Default Trace , the system automatically activates all the traces listed under this option that log information about the
query that is currently being executed.
You can also choose a single trace type.
Overview of BIA Default Traces

Trace Type / Description

The BI accelerator index server is traced. The system generates a Python program that can be executed. To find out the selections for a query, for example, support can reproduce a query (without recording the ABAP read interface).

The system records the trace with particular internal settings (trace levels). The result is returned in the form of a text file, is linked to the query, and is only valid for this query. This trace records error messages. If, for example, a query throws an exception, you can replay the trace to receive more precise error messages.
Display
If you have activated one of the three trace types, the system displays the trace after the query has been executed. You can edit the trace file and save it
locally.
Runtime problems may arise for large trace files. For this reason, you can also save the trace file without displaying or editing it.
Performance Trace in the SAP NetWeaver BI Accelerator Monitor
Activation
You can activate a performance trace in the BI accelerator monitor. This logs system responses. SAP support has tools for evaluating these system
responses. The trace is written in save-optimized format (*.tpt).
To activate a trace, choose Performance Trace Start Trace Recording from the menu.
A dialog box appears in which you set:
whether you want to start the trace for a particular user
when you want to stop the trace
For performance reasons, we recommend that you do not choose a time that is too far in the future.
In the status bar, the system shows how long trace recording has left to run (for example, BI Accelerator Monitor (Trace Recording Still Active 00:10:30) ).
Start time
Stop time
Remaining time
Users
File size in kilobytes
In all other cases, if relational aggregates are not sufficient, are too complex, or have other disadvantages, we recommend that you use BI
Accelerator (see Performance Optimization with SAP NetWeaver BI Accelerator).
Integration
Accessing the OLAP Processor
You can use relational aggregates and a BI accelerator index for the same InfoCube. A BI query always tries to use performance-optimized sources by
checking the sources from which it can draw the requested data. It checks the sources in the following order:
1. OLAP cache
2. BI Accelerator index
3. Relational aggregates from the database
4. InfoCubes from the database
If an active BI accelerator index exists, the OLAP processor always accesses this BI accelerator index and not the relational aggregates. Therefore, with
regard to modeling, we recommend that you create either relational aggregates or a BI accelerator index for an InfoCube.
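The documented access order can be sketched as a first-match lookup over an ordered list of sources. All names and lookup callables below are illustrative stand-ins, not SAP APIs; the real OLAP processor decides per query which source can serve the requested data.

```python
def read_query_data(query, sources):
    """Return (source name, data) from the first source that can serve
    the query; `sources` is ordered as in the documentation."""
    for name, lookup in sources:
        data = lookup(query)
        if data is not None:  # this source can serve the request
            return name, data
    raise LookupError("no source can serve the query")

# Illustrative stand-in lookups; the InfoCube is always the fallback.
sources = [
    ("OLAP cache", lambda q: None),         # cache miss
    ("BIA index", lambda q: None),          # no active BIA index
    ("aggregates", lambda q: {"USA": 40}),  # an aggregate matches
    ("InfoCube", lambda q: {"USA": 40}),
]
origin, data = read_query_data("sales by country", sources)
```

This mirrors the recommendation above: if an active BIA index existed, the lookup would already stop at that entry and the relational aggregates would never be consulted.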
Query Execution
When the query is executed, it is not apparent to the user whether data is being read from an aggregate, a BI accelerator index, or an InfoCube.
In the maintenance transaction, you can deactivate one or more aggregates on a temporary basis to test them for performance purposes or to analyze data
consistency.
You can also execute the relevant query in the query monitor (transaction RSRT) using a corresponding debug option: Choose
Execute + Debug . In the
Debug Options dialog box, choose Do Not Use Aggregates to execute the query with an InfoCube, as long as no BI accelerator index exists.
1.2.2.1 Aggregates
Definition
An aggregate is a materialized, aggregated view of the data in an InfoCube. In an aggregate, the dataset of an InfoCube is saved redundantly and persistently
in a consolidated form on the database.
Use
Aggregates allow quick access to InfoCube data during reporting. Similar to database indexes, they serve to improve performance.
Aggregates are particularly useful in the following cases:
Executing and navigating in query data leads to delays if you have a group of queries
You want to speed up the execution and navigation of a specific query
You often use attributes in queries
You want to speed up reporting with characteristic hierarchies by aggregating specific hierarchy levels
Structure
An aggregate is made up of the characteristics and navigation attributes that belong to an InfoCube. Characteristics that are not used in the aggregate are
compressed.
Each component of an aggregate has to be assigned to a selection type. A selection type indicates the degree of detail to which the data in the underlying
InfoCube is aggregated. You can choose one of the following selection types:
All characteristic values ("*"): The data is grouped by all values of the characteristic or navigation attribute (see Selection Type "All Characteristic Values" (*)).
Hierarchy level (H): The data is grouped by the hierarchy level node. You can also store values on the hierarchy levels of an external hierarchy (see Selection Type "Hierarchy Level" (H)).
Fixed value (F): The data is filtered by a single value (see Selection Type "Fixed Value" (F)).
You can use both time-dependent attributes and time-dependent hierarchies in aggregates.
Integration
Access to Aggregates During Reporting
If you have created an aggregate for an InfoCube, activated it and entered data for it, the OLAP processor can access these aggregates automatically. For
more information about the order in which the OLAP processor accesses the aggregates, see Performance Optimization with Aggregates. The different results
are consistent when you navigate in the BI query. The aggregate is transparent for the user.
System Response Upon Changes to Data
New data is loaded to an aggregate at a defined time using logical data packages (requests). After this operation (roll up), the new data is available in reporting
(see System Response Upon Changes to Master Data and Hierarchies)
See also:
Performance Tuning for Queries with BI Aggregatesin SAP Service Marketplace on the SAP NetWeaver Business Intelligence performance page.
COUNTRY   CUSTOMER          SALES
USA       Buggy Soft        10
Germany   Ocean Networks    15
USA       Funny Duds        5
Austria   Ocean Networks    10
Austria   Thor Industries   10
Germany   Funny Duds        20
USA       Buggy Soft        25

Aggregate grouped by COUNTRY:

COUNTRY   SALES
USA       40
Germany   35
Austria   20
The data for key figure SALES is listed for the sum of the sales for each country and not for individual customers.
The aggregate can be used
In a query that determines the sales for each country or the total sales
For evaluations based on a navigation attribute for characteristic COUNTRY or a hierarchy of the countries
You cannot use the aggregate if characteristic CUSTOMER is used for drilldown or is selected in a query because the aggregate does not contain any
information on the customer.
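The aggregation in this example amounts to a GROUP BY over the kept characteristics. The following is a minimal illustrative Python sketch (function names are not SAP APIs, and the USA/Funny Duds value of 5 is inferred from the country totals in the example):

```python
from collections import defaultdict

# InfoCube detail rows: (country, customer, sales). The USA/Funny Duds
# value of 5 is inferred from the country totals in the example tables.
facts = [
    ("USA", "Buggy Soft", 10), ("Germany", "Ocean Networks", 15),
    ("USA", "Funny Duds", 5), ("Austria", "Ocean Networks", 10),
    ("Austria", "Thor Industries", 10), ("Germany", "Funny Duds", 20),
    ("USA", "Buggy Soft", 25),
]

def build_aggregate(rows, keep):
    """Group by the kept characteristics; the others are compressed away."""
    agg = defaultdict(int)
    for country, customer, sales in rows:
        key = tuple(value for name, value in
                    (("COUNTRY", country), ("CUSTOMER", customer))
                    if name in keep)
        agg[key] += sales
    return dict(agg)

by_country = build_aggregate(facts, keep={"COUNTRY"})
# A query drilled down by CUSTOMER cannot be served from by_country,
# because the customer information has been compressed away.
```

The resulting totals match the aggregate table above (USA 40, Germany 35, Austria 20), and the missing CUSTOMER key illustrates why the aggregate cannot serve a customer drilldown.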
CUSTOMER          INDUSTRY
Buggy Soft        Technology
Ocean Networks    Technology
Funny Duds        Consumer products
Thor Industries   Chemical industry

Aggregate grouped by the navigation attribute INDUSTRY:

INDUSTRY            SALES
Technology          60
Consumer products   25
Chemical industry   10
COUNTRY   FROM         TO           SALES PERSON
USA       01.01.2000   31.12.2000   Smith
USA       01.01.2000   31.12.2001   Miller
Germany   01.01.2000   31.03.2001   Meyer
Germany   01.04.2000   31.12.2001   Huber
Austria   01.01.2000   31.12.2001   Huber

Aggregate grouped by SALES PERSON:

SALES PERSON   SALES
Miller         40
Huber          55
The aggregate can be used in a query that has the same key date as the aggregate.
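The key-date behavior can be sketched as: resolve the time-dependent attribute on the key date, then aggregate. The validity intervals below are illustrative assumptions (the extracted table's dates are ambiguous), chosen so that the lookup reproduces the Miller 40 / Huber 55 totals above.

```python
from datetime import date

# Time-dependent SALES PERSON assignments. The validity intervals are
# illustrative, chosen so the key-date lookup reproduces the totals.
assignments = [
    ("USA", date(2000, 1, 1), date(2000, 12, 31), "Smith"),
    ("USA", date(2001, 1, 1), date(2001, 12, 31), "Miller"),
    ("Germany", date(2000, 1, 1), date(2001, 3, 31), "Meyer"),
    ("Germany", date(2001, 4, 1), date(2001, 12, 31), "Huber"),
    ("Austria", date(2000, 1, 1), date(2001, 12, 31), "Huber"),
]
country_sales = {"USA": 40, "Germany": 35, "Austria": 20}

def aggregate_by_person(key_date):
    """Evaluate the time-dependent attribute on the key date, then sum."""
    totals = {}
    for country, start, end, person in assignments:
        if start <= key_date <= end:  # valid on the aggregate's key date
            totals[person] = totals.get(person, 0) + country_sales[country]
    return totals

by_person = aggregate_by_person(date(2001, 7, 1))
```

A different key date would pick different assignments, which is exactly why an aggregate with a time-dependent component can only serve queries with the same key date.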
COUNTRY   CUSTOMER         SALES
Germany   Ocean Networks   15
Germany   Funny Duds       20
The aggregate can only be used in queries that have the same filter value.
The aggregate cannot be used if other countries are required in a query.
We recommend that you use filter values for aggregates if only one variant is needed for reporting, for example:
The plan/actual indicator
The current fiscal year
Specific versions
HIERARCHY NODE   SALES
America          40
Europe           55
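Selection type H amounts to mapping each characteristic value to its hierarchy node before summing. A minimal sketch, with an assumed country-to-region hierarchy that matches the example:

```python
# External hierarchy: country -> region node (assumed mapping, chosen
# to match the example's totals).
region_of = {"USA": "America", "Germany": "Europe", "Austria": "Europe"}
country_sales = {"USA": 40, "Germany": 35, "Austria": 20}

# Selection type H: store the values on the hierarchy-level nodes.
region_sales = {}
for country, sales in country_sales.items():
    node = region_of[country]
    region_sales[node] = region_sales.get(node, 0) + sales
```

Because only node totals are stored, such an aggregate can serve queries at the region level or above, but not a drilldown to individual countries.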
Procedure
1. In the context menu for the InfoCube , choose Maintain Aggregate . The Proposals for Aggregates dialog box appears.
2. Specify if you want the system to propose aggregates.
Function
Create proposals
You can change these proposals by using Drag & Drop to add or remove
dimensions, characteristics or attributes.
Create yourself
Check Definition .
Result
You can activate the aggregates you have created and fill them with data.
Procedure
How you manually create or change an aggregate for an InfoCube is described below.
Access from Data Warehousing Workbench
1. You are in the Data Warehousing Workbench in the Modeling functional area. In the navigation window, choose InfoProvider and in the InfoProvider
tree, navigate to the InfoCube whose queries you want to optimize.
2. In the context menu of the InfoCube , choose Maintain Aggregates . The Maintain Aggregates screen appears . If an aggregate has already been
created for the selected InfoCube, you can also get to the maintenance screen by double-clicking on
If you are creating the first aggregate for an InfoCube, the Proposals for Aggregates dialog box appears first. You can choose whether the
system proposes aggregates or whether you want to create them manually.
For more information, see Creating the First Aggregate for an InfoCube .
3. The left side of the screen shows the dimensions, characteristics and navigation attributes of the selected InfoCube in a tree structure as Selection
Options for Aggregates .
Select one or more objects to be copied to the aggregate.
Define the granularity you require for the data in the aggregate. Add all the characteristics derived from these characteristics.
For example, if you define an aggregate for the month, you should also include the quarter and year in the aggregate.
This enhancement does not enlarge the dataset, but allows:
A year aggregate to be built from this aggregate
Those who need the annual values to use the queries for this aggregate.
You can only include a characteristic and one of its attributes in an aggregate in expert mode ( Extras Switch Expert Mode On/Off ). An
aggregate of this type has the same granularity and size as an aggregate that has only been built using the characteristic, but is affected by
the hierarchy/attribute change run. Compared with the aggregate for the characteristic in which the attribute information is defined by a join with
the master data table, the aggregate for the characteristic and the attribute only saves the database join.
We therefore recommend that you either build an aggregate using the characteristic or you build a much smaller aggregate using the attribute.
4. You have different options for creating an aggregate:
Transfer the selected object(s) to the Aggregates column on the right side of the screen using drag and drop.
Select Create New Aggregate .
The Enter Description for Aggregate dialog box appears.
5. Enter:
Short description
Long description
To change the text at a later time, select Change Description Text in the context menu of the aggregate.
6. Choose Continue . The Maintain Aggregates screen appears.
The system displays the aggregate in the top-right area of the screen. The log is displayed in the lower part of the screen.
For more information, see Design and Components of Aggregates .
7. If an aggregate contains a time-dependent component, you must assign a key date to the aggregate.
When you fill the aggregate, the key date behaves like the key date of a query: The time-dependent attributes and hierarchies are evaluated on
this key date. For this reason, aggregates with a time-dependent component can only be used in a query if the key date of the query is the
same as the key date of the aggregate.
In the Select Variable or Fixed Date dialog box, select the following as the key date:
A variable that is also used in queries for the key date and can be automatically calculated in the SAP Exit or Customer Exit processing types (see
Variables),
Aggregates with a variable key date must be updated regularly. You have to include this process in a process chain ( Further BI Processes ->
Adjust Time-Dependent Aggregates ).
A particular calendar day
To enter a calendar day, select object CALENDAR (<Calendar>) in the Select Variable or Fixed Date dialog box. Choose
Selection. The Calendar dialog box appears. You can copy this date to the aggregate definition by double-clicking on it.
Transfer
If aggregates do not contain much data, very small partitions can result. This affects read performance. Aggregates with very little data should
not be partitioned. Note that if you change this property to Not Partitioned for an existing aggregate, you have to activate and fill the
aggregate again.
9. You can change the structure of the aggregate by adding additional components or deleting existing ones. You can also change the key date.
Inserting components into the aggregate
i. Select one or more objects in Selection Options for Aggregates .
ii. Use drag and drop to transfer them to the aggregate that you want to change on the right-hand side of the screen.
Where necessary, change the selection type (BW-WHM) by choosing the appropriate entry in the context menu:
All characteristic values
Hierarchy level
Fixed value
Aggregates containing fewer than 14 components are stored on the database in optimized form (see Loading Data into Aggregates Efficiently).
Note that the characteristics that are defined in the InfoCube are also included in the aggregate and thus increase the number of components,
even though they are not visible on the interface.
Deleting components from the aggregate
i. In the aggregate tree, navigate to the characteristic(s) or navigation attributes that you want to delete.
To delete a dimension from an aggregate, you have to delete all the characteristics and navigation attributes of this dimension.
Changing the key date
i. In the aggregate tree under Properties , select the node Variable for Key Date and choose Change in the context menu. The Select Variable or Fixed Date dialog box appears.
ii. Select and transfer the required variable or calendar day.
The key date computed from a changed variable is only copied once the adjustment process has been executed.
10. To check the aggregate definition for inconsistencies, choose Check Definition .
11. Save the new or changed aggregate.
Result
You can activate the new or changed aggregate and fill it with data. It is available for reporting.
Use
This screen area is used to:
Define aggregates
Obtain information about the status of individual aggregates
You can define several aggregates for an InfoCube. However, make sure that this is useful:
Advantage: Aggregates improve the performance of queries
Disadvantage: Aggregates increase load time
when uploading data packages
during the hierarchy/attribute change run after loading master data
when modifying time-dependent aggregates.
To optimize an InfoCube, you should repeatedly check:
Whether aggregates are missing: Create new aggregates.
Whether existing aggregates are still being used: Delete unnecessary aggregates.
The aggregate display in the Maintenance for Aggregate screen helps you to evaluate aggregates.
Structure
Column
Information
Aggregates
Technical Name
Save
Proposed Action
Once the system has proposed a new aggregate, it recommends that you activate this aggregate. This is marked accordingly in this column.
Status
Created
Changed (the modified aggregate definition is no longer the same as the active aggregate definition)
Saved and active
Filled/Switched Off
By default, the system aggregates according to all values of the selected objects (selection type "*"). In the context menu of the selection type, you can change this setting:
Hierarchies / Hierarchy Level (H)
Fixed Value (F)
Valuation
The greater the number of minus signs, the worse the evaluation of the
aggregate:
"-----" means: Aggregate can potentially be deleted.
The larger the number of plus signs, the better the evaluation of the
aggregate:
"+++++" means: Aggregate potentially very useful.
The evaluation is based on a number of criteria. The following criteria are currently
used in the evaluation:
How does the compression of the data compare with the InfoCube: How
much smaller is the aggregate compared to the InfoCube?
When was the aggregate last used? (See under Last Used ).
Records
Number of records read on average from the source to create a record in the
aggregate.
This example is based on an aggregate with three records. If 10 records are read
for the first record in the aggregate, 15 records are read for the second record
and 20 records are read for the third record, the aggregate has a mean of 15
"compressed records".
This provides information about the quality of the aggregate.
The greater the value, the greater the compression and the better the quality of the aggregate.
Since an aggregate should be 10 times smaller than its source, the number
should be larger than 10.
If the value is one, the aggregate is a copy of the InfoCube. In this case,
consider deleting the aggregate.
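The "compressed records" figure described above is a simple arithmetic mean of the number of source records read per aggregate record; a minimal sketch (the function name is illustrative):

```python
def mean_compression(records_read):
    """Average number of source records behind one aggregate record.

    A value of 1 means the aggregate is a copy of its source; values
    above 10 indicate worthwhile compression.
    """
    return sum(records_read) / len(records_read)

# The documented example: three aggregate records built from 10, 15
# and 20 source records respectively.
quality = mean_compression([10, 15, 20])  # 15.0 "compressed records"
```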
Use
Last Used
Date
When was the aggregate last used in reporting?
If an aggregate has not been used for a long time, deactivate or delete it.
Note that certain aggregates cannot be used at certain times (for example during
vacation).
Do not delete basic aggregates that you created to speed up the change run.
Last Roll Up
Date
When was data last entered for the aggregate?
Last Roll Up By
Last Changed On
Date
When was the aggregate definition last changed?
Last Changed By
Integration
Log display
When you have defined the aggregate and filled it with data, the log display is shown in the lower part of the screen.
Status overview of aggregates
There is a status display for all the aggregates in the BI system in the Administration functional area of the Data Warehousing Workbench:
In the navigation pane, under Monitors , choose Aggregates . The Status of the Aggregates screen area lists all the InfoCubes and all the existing aggregates under each InfoCube.
The structure of this screen area corresponds to the Aggregate screen area of the Maintenance for Aggregate screen. Only the status of the aggregates is
displayed, not their components.
By double-clicking on a particular aggregate, the Aggregate Display screen appears. The aggregate is displayed here together with its components.
In order to work efficiently with aggregates, you must check the structure and use of the aggregates. You can save time during uploading by either
deactivating or deleting the aggregates that you no longer need for reporting.
Functions
Function
Aggregate Tree
Information
The Aggregate Tree dialog box appears.
The system shows how the aggregates of an InfoCube relate to one another, in other words, which aggregate can be built from which other aggregate. With the help of the aggregate tree, you can identify similar aggregates and manually optimize specific aggregates on this basis.
You can start automatic optimization in the Maintenance for Aggregate screen by choosing Propose Optimize . For more information about automatic optimization, see Automatically Selecting and Optimizing Aggregates.
Switch On/Off
You can temporarily switch off an aggregate to check if you need to use it. An
aggregate that is switched off is not used when a query is executed.
To do this, select the relevant aggregate and choose Switch On/Off .
Deactivate
The system deletes all the data and database tables of an aggregate. The definition of the aggregate is not deleted.
To do this, select the required aggregate and choose Deactivate .
The status display in the Status and Filled/Switched Off columns changes back accordingly.
If you want to, you can activate and fill the aggregate again later.
Delete
The system deactivates the aggregate and deletes the definition of the aggregate.
To do this, select the aggregate to be deleted and choose the delete function.
Prerequisites
An active version of the InfoCube is available.
Before you can use the query statistics, queries have to exist for the InfoCube.
These must be collected before BI statistical data can be analyzed. In order to collect statistical data, the corresponding function has to be activated for the
InfoCube ( DW Workbench Tools BI Statistics for InfoCubes ) and queries must have already been executed.
Features
You can choose Proposals in the menu if you want the system to propose aggregates. You have the following options:
Proposals from queries: The system considers the queries that are created for an InfoCube.
Proposals from the previous navigation: The system evaluates the last navigational step that you carried out with a query.
Proposals from BI Statistics (tables): The system considers BI statistical data (database tables).
Proposals from BI Statistics (InfoCube): The system considers the data that is contained in the BI Statistics InfoCube.
You can also postprocess the aggregates and add or delete characteristics.
See also:
Proposals from Queries
Proposals from BI Statistics
We recommend that you use this function the first time you optimize the InfoCube. If you have already executed queries, use the other
options for optimizing, because the number of times a query has been executed and the individual navigational steps are also taken into
account.
The minimal aggregate MIN corresponds to the smallest aggregate possible. This only contains the data that is needed for the initial drilldown on a query.
See also:
Proposals from BI Statistics
Optimizing Proposed Aggregates
We recommend that you use this function if representative statistical data already exists. You run this function at regular intervals to modify
the aggregates in accordance with changes to user actions.
You can evaluate the data saved in BI Statistics (database) or BI Statistics (InfoCube) by selecting the menu path Proposals Proposals from BI
Statistics (Tables) or Proposals from BI Statistics (InfoCube) . You can restrict the analysis to a subset of the data by specifying an interval for the start time
or runtime of the query.
After the data has been read from BI Statistics, the optimal aggregate is determined for every navigation step, and a list of the different aggregates is created.
For aggregates in a component that vary only in terms of their selection type, all aggregates with the selection type hierarchy (H) or fixed
value (F) are replaced with an aggregate that is grouped by characteristic values (*). This is possible as long as the aggregate has already
been proposed.
The list of proposed aggregates has the same structure as the proposals from BI queries. However, the list is generally longer.
You can modify or delete the proposed aggregates.
See also:
Proposals from Queries
Optimizing Proposed Aggregates
Activities
You can run a simplified optimization by choosing Proposal Optimize.
This optimization uses a heuristic: it is assumed that the number of aggregates should be reduced first. The aggregates that have been called least often and together account for 20% of all calls are selected. These aggregates are checked, one after the other, to see whether there are aggregates with exactly one extra component.
If the system finds more than one aggregate with exactly one additional component, it chooses the aggregate that has been called most often. The calls for the checked aggregate are added to this number. The checked aggregate (from the 20% set) is then deleted from the list of proposed aggregates.
However, this only happens if the number of calls for the checked aggregate is not more than double the calls for the aggregate with the extra
components. This prevents aggregates from being replaced by others that are used relatively rarely.
You can continue to optimize until the set of aggregates is small enough, or until no more aggregates can be merged.
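One pass of the pruning heuristic described above can be sketched as follows. This is a simplified, illustrative reading of the description, not the actual SAP algorithm; all names and data structures are assumptions.

```python
def optimize(aggregates):
    """One pass of the pruning heuristic (simplified sketch).

    `aggregates` maps name -> (components: frozenset, calls: int).
    The least-called aggregates that together account for up to 20% of
    all calls are folded into an aggregate with exactly one extra
    component, unless they are called more than twice as often as it.
    """
    total = sum(calls for _, calls in aggregates.values())
    budget = 0.2 * total
    for name in sorted(aggregates, key=lambda a: aggregates[a][1]):
        comps, calls = aggregates[name]
        if budget - calls < 0:
            break
        budget -= calls
        # Candidates: aggregates with exactly one additional component.
        cands = [a for a, (c, _) in aggregates.items()
                 if comps < c and len(c) == len(comps) + 1]
        if not cands:
            continue
        best = max(cands, key=lambda a: aggregates[a][1])
        bcomps, bcalls = aggregates[best]
        if calls <= 2 * bcalls:  # do not replace with a rarely used one
            aggregates[best] = (bcomps, bcalls + calls)
            del aggregates[name]
    return aggregates

pruned = optimize({
    "country": (frozenset({"COUNTRY"}), 2),
    "country_customer": (frozenset({"COUNTRY", "CUSTOMER"}), 8),
})
```

In the example call, the rarely used country aggregate is folded into the country/customer aggregate, which can answer the same queries with one extra component.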
Since the optimizer has no information about the data structure, you should check the proposals again before filling aggregates with data. For
example, a proposed aggregate may contain a characteristic that would make the aggregate almost the same size as the InfoCube. This would
mean that when the aggregate is filled, the system virtually creates a copy of the InfoCube. This is not generally the objective when using
aggregates.
See also:
System Response Upon Changes to Master Data and Hierarchies
Prerequisites
Procedure
Select the aggregate that you want to activate and fill.
Choose Activate and Fill . The system creates an active version of the aggregate.
System Activity
The system creates the tables required by the aggregate definition in the database. Aggregates are created according to the same schema as InfoCubes.
Result
If the aggregate was activated successfully, the status display in the Status column changes accordingly, both for a newly created aggregate and for a changed one.
Since it can take a long time to build an aggregate from an InfoCube, all the aggregates are filled in the background.
Note that an aggregate can also read data from a larger aggregate that is already filled. You can therefore assign data to very compressed
aggregates quickly.
6. Define when you want to start the job to fill the aggregate:
now
later
This takes you to the Time of Subsequent Aggregation dialog box. Enter the date and time for the background processing.
7. In the Subsequently Aggregate the Aggregates of an InfoCube dialog box, choose
By using the transaction SLG1, you can directly access the application log even if the job is not canceled. The Evaluate Application Log
dialog box appears.
Enter the required data in the following fields:

Field          Entry
Object         RSSM (Scheduler; Monitor; Tree Callback)
Subobject      MON (Monitor)
Ext. Identif.  (External Identification)
8. Choose

System Activity
The system reports all the executed actions in the lower right part of the log display.
Result
If the aggregate was filled successfully, the status display in the Filled/Switched Off column changes accordingly.
If you used a variable for the key date in an aggregate with time-dependent attributes or hierarchies, the system evaluates this variable when filling the aggregate and builds the aggregate on the computed key date.
To activate and fill a number of aggregates at the same time, select them and choose Activate and Fill .
The system chooses the optimum sequence for filling the aggregates.
1. The large, detailed aggregates are filled first.
2. The smaller, very compressed aggregates are filled next.
The larger aggregates can therefore be used already when you are still building the smaller ones.
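The fill sequence can be sketched as a sort by estimated size, most detailed first, so that a small aggregate can later be built from an already-filled larger one instead of from the InfoCube. Names and sizes below are illustrative assumptions.

```python
def fill_order(aggregate_sizes):
    """Return the fill sequence: large, detailed aggregates first, so
    the smaller, highly compressed ones can later be built from an
    already-filled larger aggregate instead of from the InfoCube."""
    return sorted(aggregate_sizes, key=lambda a: aggregate_sizes[a],
                  reverse=True)

# Hypothetical row-count estimates for three aggregates.
sizes = {"by_customer": 1_000_000, "by_country": 3, "by_region": 2}
order = fill_order(sizes)  # most detailed aggregate is filled first
```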
Result
The active aggregate that is filled with data can be used for reporting. If the aggregate contains data that is to be evaluated by a query, the query data is read
automatically from the aggregate.
You can find more information about the number of records read and the use of an aggregate in queries in Displaying Aggregates and their Components.
You can roll up new data packages (requests) into the aggregate. For more information, see Rolling Up Data into an Aggregate.
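Conceptually, rolling up a request means aggregating only the newly loaded rows and merging them into the existing aggregate, rather than rebuilding it from scratch. A minimal illustrative sketch (all names are assumptions, not SAP APIs):

```python
def roll_up(aggregate, request_rows, keep_key):
    """Merge a newly loaded request into an existing aggregate.

    Only the new request is aggregated; the aggregate is not rebuilt.
    `keep_key` extracts the grouped-by characteristics from a row.
    """
    for row in request_rows:
        key = keep_key(row)
        aggregate[key] = aggregate.get(key, 0) + row["sales"]
    return aggregate

# Existing aggregate by country, plus one new request to roll up.
agg = {("USA",): 40, ("Germany",): 35}
new_request = [{"country": "USA", "customer": "Funny Duds", "sales": 7}]
agg = roll_up(agg, new_request, keep_key=lambda r: (r["country"],))
```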
Prerequisites
New data packages (requests) have been loaded into an InfoCube.
Aggregates for this InfoCube have been activated and filled with data.
Procedure
In InfoCube maintenance, you can specify how the data packages are rolled up in the aggregate. You can do this on an InfoCube-by-InfoCube basis.
You are in the Data Warehousing Workbench in the Modeling functional area. In the context menu of the required InfoCube, choose Manage .
The system copies the InfoCube data into the table at the top of the screen.
To optimize data load performance, you can specify that you want to automatically delete indexes before the load operation and recreate them
when the data load is complete. You do this on the Performance tab page. Building indexes in this way accelerates the data load process, but
has a negative impact on system performance when the data is read. Only use this method if no read process takes place while the data is
being loaded.
If you want to switch on index building during roll up anyway, choose Create Index (Batch) and select the required options: Delete InfoCube
Indexes Before Each Data Load and then Refresh or Also Delete and then Refresh Indexes with Each Delta Upload .
Type of Execution / Procedure

Include roll-up of the data packages as a process in a process chain:
Call transaction RSPC. This opens the Process Chain Selection dialog box, which provides an overview of the various process chains in the BW system. If you cannot find a suitable process chain, you can create a new process chain for the roll-up. For more information, see Creating Process Chains in Creating Process Chains by Using a Maintenance Dialog for a Process.

Start data package roll-up manually:
Use this procedure in particular if data from several data packages creates a
logical unit and should therefore be released together.
Different plants deliver their data at different points in time. The data only needs
to be visible in the InfoCube when all plants have loaded their data into the
InfoCube.
You can choose one of the following start conditions:
Immediately
Date/time
After job
After event
In operation mode
4. If you want to run the job periodically, set the corresponding indicator.
5. Save your entries.
The following procedures are also possible but should not be used for new scenarios.
Type of Execution / Procedure

Postprocessing:
The InfoCube must be technically correct and you must be sure of its quality. Only use automatic roll-up if you load requests into the InfoCube in such a way that there is no time overlap between the load process, the roll-up, and other automatisms in the InfoCube. For more information, see Automatic Further Processing.

Program RSDDK_AGGREGATES_ROLLUP:
You can schedule this program as a regular background job or use it in an event collector.
Result
The new data is available in reporting for queries started after the roll up.
See also:
Managing InfoCubes
Note that with a structural change, all aggregates of all InfoCubes are modified if they are affected by the changes to the hierarchies and
InfoObjects. This may take some time.
You can still report on the old hierarchies and attributes during the change run.
Integration
If the changes affect an amount of data that exceeds a certain threshold value, modifying the aggregate is more time-consuming than reconstructing it. You
can change this threshold value: in the Implementation Guide (IMG), choose SAP NetWeaver Business Intelligence Performance Settings
Parameters for Aggregates , section Percentage Change in the Delta Process . In the Limit with Delta field, enter the required percentage (a number
between 0 and 99); 0 means that the aggregate is always reconstructed. Adjust this parameter until the system response is as fast as possible.
Features
If an aggregate is affected by changes to the data, it is either modified (in a delta process) or reconstructed. When you modify an aggregate, the obsolete data
records are posted negatively and the new data records are posted positively.
You can modify aggregates manually or automatically using a program.
You can start multiple change runs simultaneously. The prerequisite for this is that the lists of master data and hierarchies to be activated are different, and
that the changes affect different InfoCubes. If a change run terminates, you have to start the same change run again. You do this by starting the change run
again with the same parameters (same list of characteristics and hierarchies).
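The delta principle described above can be sketched as follows. This is an illustrative model, not SAP internals; the record keys and the 20 % default limit are assumptions made for the example.

```python
# Sketch of the delta principle (illustrative, not SAP internals):
# obsolete aggregate records are posted negatively and the new state
# positively, so the aggregate needs no full rebuild for small changes.

def delta_postings(old_records, new_records):
    """old/new_records: dict mapping characteristic key -> key figure value.
    Returns the postings that turn the old aggregate state into the new one."""
    postings = []
    for key, value in old_records.items():
        if new_records.get(key) != value:
            postings.append((key, -value))          # cancel obsolete record
    for key, value in new_records.items():
        if old_records.get(key) != value:
            postings.append((key, value))           # post new record
    return postings

def use_delta(changed_pct, limit=20):
    """Mirror of the 'Limit with Delta' parameter: 0 means always rebuild."""
    return 0 < limit and changed_pct <= limit

old = {("C1",): 100, ("C2",): 50}
new = {("C1",): 120, ("C2",): 50}
print(delta_postings(old, new))   # [(('C1',), -100), (('C1',), 120)]
```

Only the changed record is touched: its old value is canceled with a negative posting and the new value posted positively, while unchanged records stay as they are.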
Activities
Manual Modification
1. In the Data Warehousing Workbench menu, select Tools Hierarchy/Attribute Changes . (Alternatively, in the Data Warehousing Workbench, in the
Administration functional area, choose Change Run ). The Execute Hierarchy/Attribute Changes for Reporting screen appears. On this screen, all the
executed change runs are listed with detailed information. Even if the application logs for change runs have been deleted, the change runs will still be
displayed in the history. All the InfoObjects and hierarchies that are scheduled for the structural change are selected by default.
2. If you only want to carry out the structural changes for individual InfoObjects and hierarchies, select the InfoObject List or Hierarchy List pushbutton.
PUBLIC
2014 SAP SE or an SAP affiliate company. All rights reserved.
Remove from the list the InfoObjects and hierarchies that you do not want to change.
3. Schedule a new structural change by choosing Selection and specifying the start date. The status of the structural change is displayed in the upper
section of the screen.
Automatic Modification
You can also schedule the same function as a program in the background.
You can also include the program in a process chain and schedule it regularly in background processing.
The attributes for characteristic 0MATERIAL are uploaded weekly. You can schedule the program so that it starts after the upload. You can
also use the InfoObject list 0MATERIAL as a variant so that only changes that are made to the material attributes are taken into account.
Other changes, which may only be needed later, are ignored.
Choose Log to display the messages for the hierarchies and attributes that have been changed manually or automatically.
You are in the Data Warehousing Workbench in the Modeling functional area. In the InfoProvider tree, navigate to the required InfoCube.
In the context menu of the InfoCube, choose Manage .
In the lower part of the screen, select the Roll Up tab page.
Under the Aggregates group header, set the corresponding indicator in the Compress After Roll Up field.
Alternatively you can set automatic compression after roll up. This is described below:
You are in the Data Warehousing Workbench in the Modeling area. In the context menu of the required InfoCube, choose Display or
Change . Choose Environment InfoProvider Properties Display or Change . On the Roll Up tab page, choose option Compress
After Roll Up .
Indicator        Information
Set (default)    Automatic compression switched on
Not set          Automatic compression switched off
In BI background management (transaction RSBATCH), settings can be made for the following process types:

Process Type    Description
AGGRFILL        Fill aggregates (initial filling)
ATTRIBCHAN      Attribute change run
CHECKAGGR       Check aggregates
CONDAGGR        Compress aggregates
ROLLUP          Roll up
For roll up you can also make these settings in the InfoCube:
You are in the Data Warehousing Workbench in the Modeling area. In the context menu of the required InfoCube, choose Manage . On the
Roll Up tab page, choose
Parallel Processing . A dialog box appears in which you can define settings for parallel processing.
For the change run, you can also make the settings in the Administration functional area of the Data Warehousing Workbench. Go to
Change Run : Under the group header Executing Change Runs , choose the corresponding pushbutton to define settings for parallel processing.
By default, the system executes a maximum of three parallel processes. You can change this setting ( Number of Processes ) for each individual process
type. In process chains, the affected setting can be overridden for each of the processes listed above.
Note that fill, roll up and change run each consist of several subprocesses, all of which are processed in parallel.
If you do not want the system to respond in this way, you can set parameters for the InfoCube so that the system does not automatically
compress the aggregate (see section Setting Automatic Compression above). In addition, you can add the Compress Aggregate process as
a subsequent process to the Roll Up process in a process chain. In this case, the system applies the compression settings that you set in
the BI Background Management transaction (transaction RSBATCH). In the example above, the system executes roll up in five parallel
processes and compression in two.
The parallel processes are executed in the background, even if the main process is executed in the dialog. This can considerably decrease execution time for
these processes. You can determine the degree of parallelization and specify the server on which the processes are to run and with which priority (job
category). Job category A has the highest priority, followed by category B and finally C.
Note that if you choose more than two parallel processes ( Number of Processes ), one process monitors the other processes and divides the
work packages. You always have one process less in actual usage than the number of processes selected in the settings.
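A minimal sketch of this effective degree of parallelism, assuming, as described above, that one process acts purely as coordinator once more than two processes are configured:

```python
# Sketch of the effective number of worker processes (assumption: with more
# than two configured processes, one of them only monitors the others and
# splits the work packages, as described above).

def effective_workers(configured_processes):
    if configured_processes <= 2:
        return configured_processes
    return configured_processes - 1  # one process is the coordinator

print(effective_workers(3))  # 2
```

So configuring three processes yields two actual workers; to get three workers, configure four processes.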
You can use check aggregates or strict characteristic restrictions to run quick aggregate checks on a daily basis after roll up and, in addition,
schedule a complete check of critical aggregates at the weekend. Or you can run checks on a weekly basis after weekly change runs.
You can view the results of the check in the logs in the application log. If the system finds incorrect records, it saves these incorrect records in a new
database table (/BI0/01xxxxxx). You can see the name and size of the table in the log messages.
You cannot transport aggregate checks because the number, type and size of the aggregates in the test and productive systems are normally
different. For this reason, you need to create individual aggregate checks in each relevant system. You can create analogous checks with
identical check IDs in the different systems.
Procedure
You access aggregate check maintenance either from aggregate maintenance (transaction RSDDV) on the Maintaining Aggregates screen (choose Extras
Automatic Check (On/Off/Change) ), or from transaction RSDDAGGRCHECK. The Maintain Aggregate Check: Select InfoCube screen appears.
Selecting Checks
Each check is specified by the InfoCube name and an ID.
1. Enter the name of the InfoCube.
2. Enter a valid check ID:
If you want to display, edit, execute, or delete an existing check, enter the corresponding check ID. Input help is available for both the
InfoCube name and the check ID.
If you want to create a new check, provide an (unused) check ID. If you leave the field empty, the system automatically generates a new check ID.
Editing Checks
You can create, change, execute, execute ad hoc or delete aggregate checks for an InfoCube.
The following functions are available:
Editing Function
Description
Display
The Display of Check Time for the Aggregate screen appears. The system
displays the aggregate tree of selected aggregates with their check modes and
check times. If characteristic restrictions have been defined for the check of one or
more aggregates, these are then displayed in a dialog box. Choose
Confirm to close the display.
Edit
The Check-Time Selection for Individual Aggregate screen appears. The system
displays the description of the check with the check mode settings, check time, and
characteristic relationships. You can change these properties here. If you save the
check, the previous settings are overwritten.
Create
The Check-Time Selection for Individual Aggregate screen appears. The system
displays the aggregate tree with all the aggregates in the InfoCube.
4. Choose Continue .
The system checks whether new aggregates have to be created or whether
check aggregates exist that are no longer needed because the aggregates
have been deleted. The Confirmation of Aggregate Check screen appears.
The corresponding information is displayed in the Check Overview area.
The system does not check an aggregate unless all the check aggregates that
this check requires are filled when the check starts.
If you have selected aggregates with check time Now , the system executes this
part of the check in dialog mode and displays the results afterwards in the
application log.
The settings for aggregates with check time Now are not transferred to the
definition of the aggregate check.
Delete
Execute
The check you have chosen is executed in dialog mode. The system displays the
results in the application log.
Ad hoc
Evaluating Logs
If the check is executed After Change Run , After Roll Up or After Deletion , you can find the logs for the aggregate check with the logs for the main
process.
If you execute an ad hoc check or execute a check using Now , the system displays the application log automatically at the end of the check.
If the check is scheduled for background processing and is executed in the background, the logs are available in the application log under object RSRV,
subobject AGGRCHECK. The InfoCube name and check ID are recorded in the identifier.
You can also display these logs by choosing Logs on the Maintain Aggregate Check: Select InfoCube screen.
Use
You can determine the check time in the following ways:
After Change Run
The aggregates are checked immediately after the change run. The system only
checks aggregates that were modified by previous change runs and for which the
check is switched on.

After Roll Up
The aggregates are checked immediately after the data from the InfoCube has been
rolled up. The system only checks those aggregates for which the check is switched
on.

Schedule
You specify that you want to execute the check at a particular time, or periodically, in
background processing.
If you want to execute a particular check frequently but not on a regular basis,
choose the Schedule check time when you create the check and save the check. If
you cancel the scheduling, you can execute the check at any time using program
RSDDK_CHECK_AGGREGATE_CHECKID, in dialog mode or in background
processing (see section As Part of a Process Chain below).

Now
The check is started immediately as a dialog process. Aggregates that are checked
with check time Now are not included in the definition of an aggregate check and
are not saved.
Aggregate checks cannot be transported. If you have included a check in a process chain and you transport this process chain, there is the
possibility that an incorrect check may be executed. In different systems, create checks that correspond to one another with identical check
IDs.
Use
You have the following options for checking the aggregates of an InfoCube. The following table provides an overview.
Check Mode / Description

Full
For the full check, the system builds the aggregate again from the InfoCube as an
internal table and compares it with the data of the aggregate in the database, record
by record. This check can take a lot of time, but it offers the highest level of security.
Restricted (with selection options)
For the restricted check, you can set restrictions for characteristics on the
Characteristic Restrictions When Checking Aggregates screen. You can only
select those characteristics that exist in the aggregate and do not have hierarchies
or fixed value restrictions defined for them. The system runs the check in the same
way as it runs the full check, but only for InfoCube or aggregate data that meets the
defined restrictions. Depending on how strict the restrictions are, this check can be
considerably quicker than the full check.
This type of check is particularly useful in the following case: If the data in the
aggregate only changes in a particular time frame, you can restrict the dataset
considerably by restricting the check to this time frame.
Aggregated
For the aggregated check, the system aggregates all the characteristics in the
InfoCube and aggregate and compares the result of each key figure. This check is
3-4 times quicker than the full check but does not provide the same level of security.
You cannot perform this check for the following aggregates:
Check Aggregates
For the check with check aggregates, the system creates a check aggregate that is
aggregated using all characteristics. A check aggregate of this type is created for
each fixed value combination that occurs in the aggregates you have selected. The
check aggregates are filled from the InfoCube and are always modified during roll
up or deletion. The check checks that the key figure totals agree. This check is very
quick, but it cannot find every potential inconsistency in the aggregates.
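The difference in strength between the record-by-record full check and the total-based checks can be illustrated with a small sketch. This is illustrative Python, not the SAP implementation; the sample records are made up.

```python
# Illustrative comparison of the two check ideas (not the SAP implementation):
# the full check compares record by record; the aggregated check only compares
# the grand total per key figure and is therefore faster but weaker.

def full_check(cube_records, aggregate_records):
    """Record-by-record comparison after rebuilding the aggregate."""
    return sorted(cube_records) == sorted(aggregate_records)

def aggregated_check(cube_records, aggregate_records):
    """Only compares key-figure totals over all characteristics."""
    return sum(v for _, v in cube_records) == sum(v for _, v in aggregate_records)

cube = [("A", 10), ("B", 20)]
good = [("A", 10), ("B", 20)]
bad  = [("A", 20), ("B", 10)]   # totals agree, records do not

print(full_check(cube, bad), aggregated_check(cube, bad))  # False True
```

The `bad` aggregate passes the total-based check even though individual records are wrong, which is exactly why the faster checks cannot find every potential inconsistency.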
You cannot create check aggregates for the following aggregates:
1.2.3 Non-Cumulatives
Use
A non-cumulative is a non-aggregating key figure on the level of one or more objects that is always displayed in relation to time. Examples of non-cumulatives include headcount, account balance, and material inventory.
How you model non-cumulatives in the BI system depends on your scenario. Depending on how often the non-cumulatives change, and on the total
number of objects for which you want to calculate non-cumulatives, you should choose one of the following two options:
Non-Cumulative Management with Non-Cumulative Key Figures:
If the non-cumulatives change infrequently, you should choose non-cumulative management with non-cumulative key figures.
If you use non-cumulative key figures, an absolute non-cumulative value (the marker) and all non-cumulative value changes are saved in the fact table of the
InfoCube. In this way, data retention and data volume in the loading process are optimized. A data record is only loaded into the InfoCube if a non-cumulative value changes as the result of a transaction. Non-cumulatives can then be evaluated at any time in queries, using non-cumulative key figures.
The fact table for the InfoCube with non-cumulative key figures looks something like this (simplified):
Period    Material    Delta
2003001   AAA          1400
2003001   AAA           100
2003002   AAA          -150
2003003   AAA           -50
2003004   AAA           400
2003006   AAA          -300
2003009   AAA          1400
The fact table contains the transaction data. The first record in the table is the initialization. This entry does not remain physically in the table, but it is
available for rebuilding the non-cumulatives.
The last record in this case is the marker (the InfoCube was compressed after this request). The non-cumulative for period 2003005 is calculated as
follows: 1400 - (-300) = 1700. This calculation takes place at query runtime. In this way, non-cumulatives can also be displayed for time periods for
which no transaction data has been loaded.
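The marker arithmetic for this example can be sketched as follows (illustrative Python, assuming a single material and one key figure):

```python
# Sketch of reading a non-cumulative at a past period from the marker:
# marker (current stock) minus all delta records posted AFTER that period.

def stock_at(period, marker, deltas):
    """deltas: dict period -> non-cumulative change."""
    return marker - sum(change for p, change in deltas.items() if p > period)

deltas = {"2003001": 100, "2003002": -150, "2003003": -50,
          "2003004": 400, "2003006": -300}
marker = 1400  # current stock after compression (period 2003009 record)

print(stock_at("2003005", marker, deltas))  # 1400 - (-300) = 1700
```

For period 2003003 the same logic yields 1400 - (400 - 300) = 1300, matching the fact table above; the fewer changes lie between the marker and the requested period, the less the query has to read.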
The use of non-cumulative key figures is recommended when the amount of transaction data is at least 20 % smaller than the granularity of the InfoCube.
Advantages of this Solution:
The fact table is kept smaller.
The history remains.
Disadvantages of this Solution:
More administrative effort is required (for example, the InfoCube has to be compressed more often to keep the marker current).
The query runtime can be affected by the calculation of the non-cumulative.
Deletion of transaction data for material that is no longer current is not possible because deletion cannot be restricted by time.
See also Modeling of Non-Cumulatives with Non-Cumulative Key Figures.
Non-Cumulative Management with Normal Key Figures (Cumulative Key Figures):
If the non-cumulatives change frequently, you should choose non-cumulative management with normal key figures. That is, choose cumulative values.
Absolute non-cumulatives are then retained in InfoCubes for all objects for particular key dates (for example, the end of the month). These absolute non-cumulatives can be determined from a DataStore object that is supplied with non-cumulative value changes on an ongoing basis.
In this case, non-cumulative calculation takes place at query runtime. The marker is refreshed as a result of compression within the administration of an
InfoCube with non-cumulative key figures.
The fact table for the InfoCube with normal key figures looks something like this (simplified):
Period    Material    (Delta)    Non-Cumulative Value
2003001   AAA          (100)     1500
2003002   AAA         (-150)     1350
2003003   AAA          (-50)     1300
2003004   AAA          (400)     1700
2003005   AAA            (-)     1700
2003006   AAA         (-300)     1400
The fact table contains the non-cumulative values, but no delta (the delta column is shown here only to make the example easier to understand). The
non-cumulative value for a specific period can be determined using a key figure with an exception aggregation over the period.
The values for the key figures are saved in the granularity of the InfoCube. If the amount of transaction data is almost the same as the number of the most
granular InfoObjects (for example, week multiplied by material) then using normal key figures is recommended.
Advantages of this Solution:
The query runtime is not affected by the calculation of the non-cumulative.
The deletion of transaction data for material that is no longer current is possible and easy.
Disadvantages of this Solution:
The fact table becomes larger than it is when using non-cumulative key figures.
In order to properly update postings in the future and the past, the data first has to be loaded into a DataStore object and then into the InfoCube.
"How to..." Paper for Inventory Management Scenarios in BI
SAP provides a document that deals extensively with the technical features of non-cumulative management in BI.
PUBLIC
2014 SAP SE or an SAP affiliate company. All rights reserved.
To display this document in SAP Developer Network (SDN), under https://www.sdn.sap.com/irj/sdn/howtoguides choose SAP NetWeaver 2004 Business
Intelligence How to Handle Inventory Management Scenarios.
Structure
Model of Non-Cumulative Key Figures
A non-cumulative InfoCube is modeled with at least one non-cumulative key figure. These non-cumulative key figures are mapped using one key figure for
non-cumulative changes or two key figures for inflows and outflows. Which option you choose depends on how you want to evaluate the non-cumulative key
figure.
The key figures for non-cumulative value change or for inflows and outflows are normal cumulative key figures that have summation both as aggregation and
exception aggregation. Non-cumulative key figures always have summation as standard aggregation (aggregational behavior on the database, for example,
upon compression or roll-up of aggregates). However, with reference to a time characteristic, they have an exception aggregation (in reporting) that is not equal
to summation, as it would not make sense to cumulate non-cumulatives by time.
Non-cumulative key figures such as Number of Employees are cumulated using characteristics such as Cost Center . However, it is not
meaningful to total the number of employees over different periods. Over the period you can, for example, map the average.
With the cumulative value Sales Revenue , for example, it makes sense to cumulate the individual sales revenues over different periods, as well as
over characteristics such as Products and Customers .
Example of the difference between non-cumulative and cumulative key figures:
Sales volume (cumulative value):
Sales volume 01/20 + sales volume 01/21 + sales volume 01/22 gives the total sales volume for these three days.
Warehouse stock (non-cumulative key figure):
Stock 01/20 + stock 01/21 + stock 01/22 does not give the total stock for these three days.
Technically, non-cumulatives are stored using a marker for the current time (the current non-cumulative) together with the non-cumulative changes, or the inflows
and outflows. The currently valid end non-cumulative (valid to 12/31/9999) is stored in the marker. You can determine the current non-cumulative, or the non-cumulative at a particular point in time, from the current end non-cumulative and the non-cumulative changes and/or the inflows and outflows.
Queries on the current non-cumulative can be answered very quickly, since the current non-cumulative is available as a directly accessible value. There is only
one marker for each combination of characteristic values; it is updated whenever the non-cumulative InfoCube (an InfoCube that includes the non-cumulative
key figures) is compressed. So that query access is as quick as possible, compress non-cumulative InfoCubes regularly (see Compressing
InfoCubes) to keep the marker as up to date as possible.
For example, in month 03 the marker plus three non-cumulative changes must be read for a query. If the marker is updated in month 04, only one
non-cumulative change has to be read in addition to the marker for a query in month 05. If the marker had not been updated, four non-cumulative
changes would have had to be read.
Data Transfer or Storage, and Aggregation for Non-Cumulative Key Figures
To optimize the data transport and data retention for non-cumulative key figures in the BI system, non-cumulative key figures are treated differently from
cumulative values in both technical data transfer and storage:
Non-cumulative key figures are mapped using one key figure for non-cumulative changes or two key figures for inflows and outflows.
See also: Non-Cumulative Key Figures
A non-cumulative InfoCube has to contain a time reference characteristic; that is, there must be a time reference characteristic for the exception
aggregation of the non-cumulative key figure.
See also: Time Reference Characteristics
A non-cumulative key figure always has a time-related exception aggregation.
See also: Aggregational Behavior of Non-Cumulative Key Figures
In specific cases it may be necessary to determine the validity of a non-cumulative.
See also: Validity Area.
Non-cumulatives are transferred in an initialization run and in the change runs that follow (initialization can also be omitted here).
See also Transferring Non-Cumulative Data into BW
Integration
In query definition and navigation in reporting, there is no difference in the way cumulative and non-cumulative key figures are dealt with. Cumulative and non-cumulative key figures can be used together in a query.
Integration
If an InfoCube contains several time characteristics, the time reference characteristic is always the most refined one, since all other times
in the InfoCube are derived from it.
An InfoCube contains warehouse key figures that should be evaluated for the calendar month and calendar year. In this case, the calendar
month is the most refined common time reference characteristic.
You can only maintain the time-reference characteristic and the fiscal year variant when updating an InfoCube with non-cumulative key figures. All other time
characteristics are automatically derived from the time-reference characteristic. Therefore, the time-reference characteristic must not be left blank.
There is a difference between complete and incomplete time characteristics:
The complete time characteristics are the SAP time characteristics calendar day (0CALDAY), calendar week (0CALWEEK), calendar month (0CALMONTH),
calendar quarter (0CALQUARTER), calendar year (0CALYEAR), fiscal year (0FISCYEAR), and fiscal period (0FISCPER). They are clearly assigned to a point
in time. Only these SAP time characteristics can be used as time reference characteristics, since it must be possible to derive all other time
characteristics of the non-cumulative InfoCube automatically from the most detailed time characteristic.
Incomplete time characteristics, such as 0CALMONTH2, 0CALQUART1, 0HALFYEAR1, 0WEEKDAY1 or 0FISCPER3, can be used in a non-cumulative
InfoCube but cannot be the time reference characteristic, since they are not assigned to a specific point in time.
The following graphic gives an overview of the hierarchy for SAP time characteristics:
If you have a non-cumulative for a week and a non-cumulative for a month in the same InfoCube, the coarsest common time characteristic from
which both can be derived is the calendar day. The time characteristic calendar day must therefore be included in the InfoCube so that it can
function as the reference characteristic for time-based aggregation.
The following example clarifies the difference between the FIRST aggregation and the LAST aggregation. If you consider, for example, the
aggregated values for 02.02.02, the non-cumulative with the FIRST aggregation is 90, which is the non-cumulative without receipts. The
non-cumulative with the LAST aggregation is 110, which is the non-cumulative of 90 plus the receipts of 20.
There are two kinds of aggregational behavior for non-cumulative key figures:
The standard aggregation specifies how a key figure is aggregated over all characteristics except time characteristics.
The exception aggregation specifies how a key figure is aggregated over time characteristics.
Exception aggregation with regard to time
Every key figure has a standard aggregation and an exception aggregation. Non-cumulative key figures always have summation as their standard
aggregation, but with regard to time characteristics they have an exception aggregation that is not equal to summation.
The non-cumulative key figure Warehouse Stock is aggregated using Summation for characteristics that are not time-related such as
Articles or Stock . For time characteristics such as Calendar Month , however, the non-cumulative key figure Warehouse Stock has the
exception aggregation Last Value .
Meaningful aggregations for non-cumulative key figures are primarily Average Weighted According to Calendar Days (AV1) and Last Value (LAS).
Additional, possible exception aggregations for non-cumulative key figures are listed in the following table.
Exception aggregations for non-cumulative key figures:

Technical Name    Description
AV1               Average (weighted according to calendar days)
AV2               Average (weighted with the number of working days according to the factory calendar with the ID 01)
FIR               First value
LAS               Last value
MAX               Maximum
MIN               Minimum
The times at which non-cumulatives were posted for different materials are displayed in the following graphic. The evaluation results of the non-cumulative for Material 1 , for the exception aggregations Average and Last Value , are listed in the following tables,
once by calendar month and once by calendar day.
Calendar Month    Average    Last Value
January           100        110
February          140        160
March             150        140
Calendar Day      Average    Last Value
01.01.2000        90         90
02.01.2000        90         90
03.01.2000        90         90
04.01.2000        90         90
...               ...        ...
09.01.2000        90         90
10.01.2000        90         90
11.01.2000        99         99
...               ...        ...
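The time-related exception aggregations can be sketched as follows. This is illustrative Python; the daily series mirrors the pattern of the calendar-day values above (90 up to 10.01.2000, then 99 from 11.01.2000), extended to a full 31-day month as an assumption of the sketch.

```python
# Illustrative implementations of the time-related exception aggregations
# (simplified: the series is a list of (day, value) pairs with one value
# per calendar day, which is an assumption of this sketch).

def first_value(series):
    return series[0][1]                              # FIR

def last_value(series):
    return series[-1][1]                             # LAS

def average_by_calendar_days(series):
    return sum(v for _, v in series) / len(series)   # AV1-style

# January: 90 up to day 10, 99 from day 11 (assumed to cover 31 days)
january = [(day, 90) for day in range(1, 11)] + [(day, 99) for day in range(11, 32)]

print(first_value(january), last_value(january))  # 90 99
```

FIR and LAS simply pick the boundary values of the period, while the AV1-style average weights each daily value equally by calendar day, which is why it lies between the two.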
Features
Validity area: The differing time-based validity of non-cumulatives is mapped using a validity area. The validity area describes the time period in which non-cumulatives have been managed.
Normally this time interval is valid for all records of the InfoCube, for example for all cost centers, materials, and so on. The validity interval is made up of the
minimum and the maximum of all postings.
If the first values for a product group are posted on 12.31.1999 and a non-cumulative change is posted for the last time on 3.10.2000, the validity area is the
time interval from 12.31.1999 to 3.10.2000.
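The default validity interval can be sketched as follows (illustrative Python; the combination name and dates follow the product-group example above):

```python
# Sketch: deriving the default validity area per characteristic combination
# as the interval between the first and the last posted change (illustrative).

from datetime import date

def validity_area(postings):
    """postings: dict combination -> list of posting dates."""
    return {combo: (min(dates), max(dates)) for combo, dates in postings.items()}

postings = {"product group X": [date(1999, 12, 31), date(2000, 1, 15),
                                date(2000, 3, 10)]}
print(validity_area(postings))
```

Each combination gets its own (first posting, last posting) interval, which corresponds to the default behavior described for validity-determining characteristics below.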
In certain cases it may be necessary, however, to adjust the validity manually, for example when a characteristic value is only valid for a restricted time
period. In this case, you should define a validity area, especially to guarantee the correct calculation of averages.
Another example: If the data from various source systems is loaded into the InfoCube at different times, it can be useful to keep separate validity areas for
the respective partial areas.
In the following example, the receipts for plants A, B, and C are displayed. The virtual entries are indicated in a special way in the query.
Looking at plant B, there are receipts only in March and April. For January and February, the virtual entry 120 is set, because this is the non-cumulative excluding the receipts from March. For May, the virtual entry 150 is set, which is the final non-cumulative for April. In the totals row, the total of the virtual entries and the actual non-cumulatives is calculated.
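The virtual entries for plant B can be reconstructed from the final non-cumulative by subtracting later receipts. A Python sketch follows; the individual receipt amounts (10 and 20) are hypothetical and merely chosen to be consistent with the virtual entries 120 and 150 from the example:

```python
# Hypothetical receipts for plant B; only March and April have postings.
receipts = {"Jan": 0, "Feb": 0, "Mar": 10, "Apr": 20}  # assumed amounts
final_stock = 150   # last posted non-cumulative (end of April)

# Work backwards: the stock at the end of a month equals the final stock
# minus all receipts posted in later months.
stock = {}
running = final_stock
for month in reversed(list(receipts)):
    stock[month] = running
    running -= receipts[month]

# Months without postings show the "virtual entry": 120 for Jan and Feb
# (the stock excluding the later receipts); May would show 150, the final
# non-cumulative of April.
```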
SAP recommends that when you create a non-cumulative InfoCube, you use validity-determining characteristics only in the specified cases. If
you have too many validity-determining characteristics or a validity-determining characteristic with a lot of characteristic values, performance
decreases considerably.
If you decide later that you require more validity-determining characteristics, you can modify the selection using the report
RSDG_CUBE_VALT_MODIFY. In this report, the non-cumulative InfoCube is only changed to the extent that the new validity-determining
characteristics are selected and the validity table is reconstructed. The structure of the non-cumulative InfoCube remains the same. You do
not have to reload the transaction data for it.
For each combination of characteristic values for the validity-determining characteristics, the validity area is, by default, the interval between initialization (or
the first change in non-cumulative) and the last posted non-cumulative change for this combination. That is, the validity area is created from the posting data
from when the data was loaded.
When evaluating in reporting, the non-cumulative for the requested time period is calculated from the current final non-cumulative and the corresponding non-cumulative changes (meaning that the non-cumulative is defined at every point in time within the validity area). In this way, non-cumulatives are also determined for time periods in which no change was posted.
Assuming that every plant delivers its data to the BI system separately at different times, the characteristic Plant has to be validity-determining. If you also assume that the characteristic values are Boston Plant, Dallas Plant, and San Francisco Plant, the validity intervals appear as follows:
For the Dallas plant and the San Francisco plant, the validity table is maintained as follows:
Plant           From (fixed)   To (fixed)
Dallas          001.2000       003.2000
San Francisco   001.2000       002.2000
January   February   March   Result   Comments
700       700        900     766.66   (per month)
200       200                400      (SUM)
200       300        (300)   250      (Not 266.66!)
50        100        (0)     150      (SUM)
In the query, the average amount from January and February is given as the result for the San Francisco plant since only these time periods
have been defined as valid. If a validity area had not been defined, the average given would have been too high (266.66).
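The calculation for the San Francisco plant can be sketched in Python (values taken from the example above; this is an illustration, not SAP code):

```python
# Monthly stock for the San Francisco plant; March carries only the
# virtual entry (300), outside the maintained validity area 001-002.2000.
monthly_stock = {"Jan": 200, "Feb": 300, "Mar": 300}
valid_months = ["Jan", "Feb"]

def average_stock(stock, months):
    """Average the stock only over the given months."""
    values = [stock[m] for m in months]
    return sum(values) / len(values)

# Restricting to the validity area yields 250; averaging over all three
# months (including the virtual March entry) would yield roughly 266.66.
restricted = average_stock(monthly_stock, valid_months)
unrestricted = average_stock(monthly_stock, list(monthly_stock))
```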
See also:
Maintaining the Validity Area
Procedure
By default, the minimum and maximum loaded values of the time reference characteristic (for every entry of the validity table) are used as the limits of the validity period. You can change the validity of the individual characteristic values independently of the data loaded.
1. Call transaction RSDV.
2. You get to the Report on Validity Slice Maintenance window. Here, specify the InfoCube for which you want to maintain the validity area and choose
Execute.
3. In the window Editing the Validity Slice for Non-cumulative InfoCubes, select Display/Change .
4. For each individual entry, you can now determine the update mode for the lower (from mode) and upper (to mode) limits of the validity area. You
have the following options:
Mode " "
This mode corresponds to the standard setting: for every entry, the minimum or maximum (time-related) loaded value is taken.
The current valid value is displayed in the From (Fixed) field (for the start value) or the To (Fixed) field (for the end value). You cannot change these
values.
Mode "F"
Here you can specify a fixed start or end date. Enter the values you want in the From (Fixed) field (for the start value) or the To (Fixed) field (for the end
date).
The input format must be adjusted to the corresponding time characteristic.
If 0CALDAY is the time-reference characteristic, then the input format has to correspond to the date format day, month, year as specified
in the global setting, for example, 01.01.2000 or 2000/01/01. If 0CALMONTH is the time-reference characteristic, then the input format
must contain month and year correspondingly.
Mode "R"
Here you can specify the interval limits relative to the current time. Enter the values you want, as whole numbers (positive or negative), in
the From (Rel.) field (for the start value) and the To (Rel.) field (for the end value). The specified values describe offsets in units of the time reference
characteristic, relative to the current time.
If the number +1 is entered in the To (Rel.) field for the time reference characteristic 0CALDAY, the limit of the validity period is the next
day. If, however, 0CALMONTH is the time reference characteristic, the limit of the validity period is the next month.
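How a relative threshold depends on the granularity of the time reference characteristic can be sketched as follows (Python, illustrative only, not SAP code):

```python
from datetime import date, timedelta

def relative_threshold(today, offset, time_char):
    """Shift `today` by `offset` units of the time reference characteristic."""
    if time_char == "0CALDAY":
        return today + timedelta(days=offset)
    if time_char == "0CALMONTH":
        # Month arithmetic: normalize (year, month); the day is irrelevant
        # at month granularity, so the first of the month is returned.
        m = today.month - 1 + offset
        return date(today.year + m // 12, m % 12 + 1, 1)
    raise ValueError(time_char)

# The same offset +1 means "next day" for 0CALDAY,
# but "next month" for 0CALMONTH.
```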
5. Save your entries.
Result
The modified validity table is taken into account in reporting.
An initial non-cumulative is firstly loaded for plant A. Non-cumulative changes are then posted for this plant. Subsequently, a non-cumulative
change is posted in the InfoCube for plant B. Data for plant C is then posted in the InfoCube with an initial non-cumulative at first and then with
non-cumulative changes.
Only non-cumulative changes, not initial non-cumulatives, can now be posted for all three plants.
Refer to the SAP Notes for defining validity tables. It is not normally necessary to include additional validity-determining characteristics in the
validity table.
Activate the InfoCube.
5. Loading data:
If you want to load an initial non-cumulative, load this first and then load the non-cumulative changes.
If a non-cumulative change is updated when loading data for the first time, no further initial non-cumulative can be loaded for this characteristic
value.
When loading data, the validity areas for the individual values of the validity-determining characteristics are automatically updated.
6. Query definition:
In the InfoCube both the non-cumulative key figure and the non-cumulative change, or in- and outflows, can be selected for query definition.
With evaluations in reporting, there is no difference in the way cumulative and non-cumulative values are handled. Cumulative and non-cumulative key
figures can be evaluated at the same time in a query, since the key figures are automatically aggregated correctly.
Note: Even in cases where you haven't included the reference characteristic in the query, an aggregation is implicitly executed using the
reference characteristic.
Example:
Your query has a key figure with the exception aggregation last value (LAS) and the reference characteristic 0CALDAY. You have loaded the following data
into your InfoCube:
This result comes from the fact that the standard aggregation is carried out with respect to the plants. The exception aggregation then influences the
construction of the result: since a value was posted only for plant 1 on the last date (16.06), only the value for plant 1 is included in the result, that is, 15.
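The two-step aggregation can be sketched in Python. The posting values below are hypothetical, since the data table is not included in this extract; only the final result 15 and the date 16.06 come from the text.

```python
from collections import defaultdict

# Hypothetical postings: (calendar day, plant) -> value. Only plant 1
# posted a value on the last day, 16.06.
postings = {
    ("2000-06-15", "Plant 1"): 10,
    ("2000-06-15", "Plant 2"): 20,
    ("2000-06-16", "Plant 1"): 15,
}

# Step 1: standard aggregation (SUM) over the plants for each value of
# the reference characteristic 0CALDAY.
per_day = defaultdict(int)
for (day, _plant), value in postings.items():
    per_day[day] += value

# Step 2: exception aggregation LAS over 0CALDAY: take the last day's
# value. Only plant 1 contributed on 16.06, so the result is 15.
result = per_day[max(per_day)]
```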
Excursus: Modeling Non-Cumulative Key Figures with Differing Time Reference Characteristics
If an InfoCube contains a non-cumulative key figure, then a time-based reference characteristic for the exception aggregation of the non-cumulative key figure
must exist. There can be several time characteristics per InfoCube, but only one time reference characteristic. This means, that the time-based reference
characteristic is the same for all the non-cumulative key figures of an InfoCube.
Different Time Reference Characteristics
If you have characteristics for which you manage non-cumulatives and which refer to a data object in different stages of editing, such as delivery stock, order
quantity and billing quantity, then these non-cumulative key figures all have differing time references. Therefore, these non-cumulative key figures cannot be
evaluated like this in a joint InfoCube.
Modeling Proposal
You can map the different time references using a characteristic transaction.
You then have a key figure non-cumulative with the most detailed time reference characteristic calendar day, and a characteristic transaction with the
characteristic values delivery, order, billing. This means you can store the non-cumulative for the differing transactions in one, single InfoCube.
For evaluation in reporting, you can then use the restricted key figure non-cumulative that is restricted to one of the characteristic values of the transaction
characteristic, as a structure element in the query definition.
In this way, you can evaluate the delivery stock, the order quantity, and the billing quantity using restricted key figures.
By doing this, you can minimize data transfer and storage, and reduce the number of key figures.
Compression:
Compress all requests in the non-cumulative InfoCube, or at least most of them.
The performance of a query based on a non-cumulative InfoCube depends heavily on how the InfoCube is compressed. If you want to improve the
performance of a query of this type, first check, insofar as this is possible, whether the data in the InfoCube can be compressed. You should always
compress data when you are sure that the requests affected will not need to be deleted from the InfoCube.
Validity Table
Use as few validity-determining characteristics as possible.
The number and cardinality of the validity-determining characteristics heavily influences performance. Therefore, you should only define characteristics as
validity-determining characteristics when it is really necessary.
Time Restrictions in the Query
As far as possible, restrict queries based on non-cumulative InfoCubes to time characteristics.
The stricter the time-based restriction, the faster the query is generally executed, because the non-cumulative has to be reconstructed for fewer points in time.
Time Drilldown in the Query
If you no longer need the average, split a query on a non-cumulative InfoCube (which contains both key figures with LAST aggregation and key figures with
AVERAGE aggregation) into two queries.
With non-cumulative key figures with the exception aggregation LAST, the time characteristic included in the drilldown makes a difference to performance. If,
for example, both Calendar Day and Calendar Month are included in the InfoCube, drilldown by month is faster than drilldown by day, because the number of
times for which a non-cumulative has to be calculated is smaller.
For the other types of exception aggregation (average, average weighted with factory calendar, minimum and maximum), this rule is not valid as in these
cases, the data is always calculated on the level of the most detailed time characteristic first before exception aggregation is performed.
Totals Rows
Hide the totals row in the query when not required.
Depending on the type of aggregation being used, the calculation of totals rows can be very time-consuming.
Implementation Considerations
The SAP Easy Access initial screen was introduced in BW 3.0A SP11 = BW 3.0B SP04.
Features
Query Monitor
For more information, see Query Monitor.
Trace Tool Environment
For more information, see Trace Tool Environment.
This environment replaces and enhances the functions of the OLAP trace tool (see OLAP Trace Tool (Old)).
OLAP: Cache Monitor
For more information, see OLAP Cache Monitor.
ICM Monitor
For more information, see
The trace tool environment replaces the OLAP trace tool (transaction codes RSRTRACE, see OLAP Trace Tool (Old), and RSRCATTTRACE)
and provides all of the necessary functions for a considerably enhanced application area.
Application Area
The application area encompasses a particular part of the BI system where user actions can be logged. The assignment to a particular application area is user-dependent (see Administration).
Logging and Playing Back Traces
It is useful to log a trace in the following cases:
to conserve and analyze errors and questionable process flows
to repeatedly execute selected navigation sequences (such as query navigations)
Users who want to record a trace must be activated before recording starts and deactivated again after recording. Note that the lifetime of a trace
depends on the lifetime of the session of the processes being recorded. As soon as a session ends, the system also closes the trace.
For more information, see Logging User Actions.
Playing back a trace means the controlled execution of the recorded sequence of program calls. A user can either execute the trace completely or stop
execution at a given location, in order to branch directly to the ABAP Debugger at this location. The latter option is recommended for a detailed analysis of the
recorded processing (for example, for error analysis).
For more information, see Execute Logged User Actions Again .
Processing of Automatic Regression Tests
With automatic regression tests, you can monitor the quality of the system over a longer period of time (for example during the cycle of a support package).
A wizard helps you to create automatic tests (called CATT traces). Users are guided through the individual definition steps: They make decisions regarding the
storage of the test reference data and the assignments for the data structures to be tested, and store descriptions of the navigation steps.
When the tests are executed, the CATT traces are played internally and the current results are compared with the test reference data stored in the
definition. If the traces are played successfully and the current result values agree with those of the test reference data, the test was successful. In all
other cases, the test failed. The system provides a user interface for displaying the tested data contents.
To combine a larger number of CATT traces, test packages can be generated that can be restricted according to certain selection criteria. Test packages can
be scheduled as repeatable test jobs for background processing. The system stores logs relating to the state of the test run in log files (job log and application
log). The system writes the results of the tests directly to InfoObjects or InfoProviders as master data or transaction data and immediately makes them
available for reporting.
Integration
Depending on the respective application area, use the trace tool environment in conjunction with various tools from the BI system.
In the Reporting, Planning and OLAP Technology application area, use the trace tool together with query execution.
Features
The trace tool environment allows you to work with traces, test packages and test jobs. It includes the following functions:
You access the interfaces of the individual task areas using the navigation window. These are assigned to the following areas:
Functional areas for the trace tool environment

Area             Use
Trace tool
Administration
See also:
Trace Tool
CAT Tool
Administration
In the Reporting, Planning and OLAP Technology application area, the trace tool allows you to save selected executed queries and the
subsequent query navigations, as well as certain actions in the planning modeler, long-term in the system.
Saving and analyzing error patterns
When logging traces, if error situations occur you can save them together with the steps that led to these situations arising.
In the Reporting, Planning and OLAP Technology application area, the following error situations , for example, may occur: System error,
terminations, ambiguity concerning the correctness of query result values. Some error situations only occur after a series of special query
navigations.
For error situations that are hard to reset, you can considerably reduce the support effort required by logging a trace that contains all of the actions up until the
error occurred.
Reusability and ability to schedule a trace
You can control the execution of traces that have been logged; that is, you can execute them regularly or repeatedly as a schedulable background job.
In the Reporting, Planning and OLAP Technology application area you can use this functionality to fill the OLAP cache systematically and
automatically. This allows you to increase the read performance of BI queries with respect to selected executions of queries and query
navigations.
Integration
You can use the CAT tool to develop CAT traces from standard traces. For more information, see Cat Tool.
Features
User Activation
In the User Activation area, you can activate or deactivate yourself or, as an administrator, activate or deactivate other users for the logging of a
trace. You can see all of the users currently activated for the logging of a trace in a table. For more information, see Logging User Actions.
Trace
In the Trace area, you can select a trace so that you can see or edit its properties in trace management, or execute or delete it.
You fill the Trace (ID) field using input help. Note that the system only displays the traces for the current application area (see Administration).
In the History of Last Trace table, the system displays the traces that you last selected, created or edited. You can double-click on a table entry to select a
trace.
You can display (Display), play (Execute), or delete (Delete) traces.
Trace Collection
In the Trace Collection area, the system shows a selection of traces. You can use the selection criteria Trace User, Application Layer and Trace Type to
restrict the display. Note that the system only displays the traces for the current application area (see Administration). The selection lists for Application
Layer and Trace Type each display the possible selections for the currently selected user.
Double-click on a table entry to access the trace management (see Maintaining Trace Properties).
In the Trace Collection area, you can play (Execute) or delete (Delete) traces.
See also:
In the application area Reporting, Planning and OLAP Technology, you can log selected executions of queries and the subsequent query
navigations, as well as particular actions in the planning modeler.
You can trace the following:
The execution of the query in the query monitor (see Query Monitor)
The execution of the query on the Web or in the SAP Enterprise Portal
The execution of the query in the BEx Analyzer
The data request of the planning modeler
The data store of the planning modeler
Integration
You can rerun and control traces, as well as use them to define automatically executable regression tests. For more information, see Trace Tool Environment.
In the application area Reporting, Planning and OLAP Technology , you can jump directly to the query monitor using the trace tool.
Prerequisites
The user has sufficient authorization for logging traces (see Trace Tool Environment Authorizations).
The user executes actions that can be logged by the trace tool.
Features
A successful trace logging comprises the following steps:
Activating the trace logging
Executing the actions to be logged
Deactivating the trace logging
It is irrelevant if the activation or deactivation and the execution of the actions takes place in the same mode, or in two different modes.
During activating and deactivating a trace logging, the following cases are supported:
Users want to activate or deactivate themselves.
An administrator wants to activate or deactivate a particular user.
You can determine for each user, if, in addition to logging user actions, they are to be activated or deactivated for generating tests.
Activities
Activating the Trace Logging
1. In the navigation window of the Trace Tool functional area, choose the User Activation area.
2. The Trace User field is initialized with the current user name.
If, as an administrator, you want to activate another user, enter this name in the Trace User field.
3. If the user intends to convert a trace into an automatic test, set the indicator for the Activation for Test Generation option.
4. To activate yourself as a user, choose Activate USERNAME.
If, as an administrator, you want to activate a user whose name you have entered in the Trace User field, choose Activate.
As a result of your activation, the system displays the activated user as well as the time of activation and the selection for the test generation in the user
table.
User Interaction
1. To log a trace in the application area Reporting, Planning and OLAP Technology , call the required environment for executing the query or modeling a
planning application:
Query monitor
BEx Analyzer
Web browser for executing the query on the Web
Planning modeler
2. Perform all the actions (query navigations or actions in the planning modeler) that the system is to log.
Deactivating the Trace Logging
If the current session ends during the logging of user actions, the system automatically closes the trace logging. Executing the query monitor
again or refreshing the Web browser window therefore interrupts logging. If further interactions take place, a new trace is created.
If, however, the user is deactivated and then reactivated for trace logging in a second session, and then executes further interactions in the
first session, a new trace is also created.
Property     Description
Trace GUID   32-digit code that cannot be changed during the lifetime of the trace
Trace (ID)   Key with a name, also generated by the system according to a certain
             naming convention, which can however be changed by the user
To be able to find a trace and, if necessary, edit, play, or delete it, you must first determine which key the system generated.
You either use the overview of all trace loggings in the Trace Collection area, or the selection of individual traces in the Trace area.
To make it easier to find the trace again, you can change the system-generated trace (ID).
Prerequisites
You have logged the required trace (see Logging User Actions).
Activities
To determine the trace ID of a current trace logging, proceed as follows:
1. In the navigation window, from the Trace Tool functional area, choose the Trace Collection area. In the initial status, the system fills the Trace User field
with the current user name as the restriction.
2. Add the trace type Standard Trace as a restriction criterion for the traces displayed in the trace list.
Each newly logged trace is of type Standard Trace. For more information, see Maintaining Trace Properties.
3. The system displays a list of the existing traces. The list is sorted in descending order, according to date. Accordingly, the first entry in the list is a link to
the current trace.
4. To display all the properties of the logged trace, change to the display of the trace properties in one of the ways described in the following:
a. In the trace list, double-click on the required entry.
b. In the trace list, select the required entry and choose
Display .
c. Go to the Trace area and restrict the selection to the required trace.
Prerequisites
You have logged the required user actions (see Logging User Actions).
You have determined the required trace (see Determining a Logged Trace).
Features
The trace tool supports different process flows when executing a trace:
You can directly execute a trace. This option is suitable for regression tests, for example.
You can have the execution of the trace interrupted before particular program objects. This option is particularly useful if you want to analyze errors. The
ABAP Debugger appears directly before the selected program object is called.
Since the trace tool only plays those program objects that belong to the currently chosen application layer, you can only select these objects
for playback.
On the Execution of Traces screen, the system shows the standard information for the trace (Trace GUID, Trace (ID), Description) in the Trace screen area.
In the Process screen area you can select a Process Mode . This specifies how the trace is to be handled when it is played:
The Play Mode (Play Trace) mode causes the trace to be played. For standard traces, this is the only possible selection.
The Check Mode (Test Trace) mode starts the regression test. This mode is only available for CATT traces.
The following tab pages are displayed in the lower screen area:
On the Display Settings tab page, you can set the display mode. You can choose:
Debugging at Call Position : The system interrupts the execution of a trace at a recorded program object.
Debugging at Check Position : The system interrupts the execution of a trace at a checked program object.
On the Play Settings tab page, you can select from trace-type-dependent settings that influence the execution of a trace.
For traces from the application area Reporting, Planning and OLAP Technology , the following settings are made:
Read Mode: Read All, Read During Navigation, Read During Hierarchy/Navigation. For more information, see Read Mode.
Cache Mode : Without Cache, With Cache (Initial), With Cache (Filled). For more information, see Cache Mode.
Aggregation Mode : Without Aggregate, With ROLAP Aggregates, with BI Accelerator Index. For more information, see Performance
Optimization with SAP NetWeaver BI Accelerator and Performance Optimization with Aggregates.
Use Execute to start the execution of a trace. The system documents the process flow step by step, using corresponding messages. The messages are
displayed in a window in the lower screen area.
Activities
1. You can play a trace from the Trace or Trace Collection areas of the Trace Tool functional area. Follow these steps:
a. In the navigation window, from the Trace Tool functional area, choose the Traces area. You can either enter the name of the trace to be
played in the Trace (ID) field, select it from the input help, or transfer it by selecting the appropriate line in the History of Last Traces
table.
b. In the navigation window, from the Trace Tool functional area, you can choose the Trace Collection area. This allows you to display a selection of
traces that are restricted by user, application layer, and trace type. When you have specified the selection criteria, press the enter key. The system
displays the traces that correspond to the selection criteria. Select the required trace.
2. You reach the Execution of Traces screen in one of the following ways:
a. Choosing Execute takes you directly to the Execution of Traces screen.
b. Choosing Display or Change takes you to the Trace Attributes screen that is used for the maintenance of a trace.
Determine the settings for executing a trace that are described in the Features section.
3. To execute the chosen trace, choose Execute.
Integration
You can access the maintenance of trace properties for a particular trace, using the Trace and Trace Collection areas of the trace tool.
Prerequisites
The required user activities were logged and the corresponding trace determined. For more information, see Logging User Actions and Determining a Logged
Trace.
Features
In the upper screen area, the system displays the trace GUID, trace (ID) and description of the chosen trace (see Determining a Logged Trace).
You can change the trace (ID) and description in the change mode:
Changing trace (ID) and description

Property            Description
Trace (ID)          Key with a name that the system generates using the pattern
                    systemname/xxxxxx (xxxxxx represents a 6-digit sequence number).
                    You can change the generated trace (ID) to any 20-digit term.
                    Note that the trace (ID) must be unique within the system.
                    You can reuse the trace (ID)s of deleted traces.
Description         Free text
Trace type          Type of trace:
                    Standard trace
                    CATT trace: special standard trace used to generate automatic tests
                    OLAP trace: CATT trace in the Reporting, Planning and OLAP
                    Technology application area
Application area
Application layer   Clearly definable call layer during process editing, within an
                    application area. In the Reporting, Planning and OLAP Technology
                    application area there are the following application layers:
                    BI BEx Request
                    BI Business Explorer
                    BI Open Analysis Interfaces
                    BI Aggregation Layer
                    BI Core Calculation Layer
                    BI Data Access Layer
                    BI Planning Layer
                    BI BPS Layer
                    In the change mode, you can choose the application layer here.
Author
SAP system ID
Current release
Patch level
Last changed by
The following image illustrates the process of executing a trace using an example from the Reporting, Planning and OLAP Technology application area:
While executing a trace, selecting a certain application layer allows the extent of the execution to be controlled by logged program objects.
If an upper application layer, for example BI Business Explorer, is not of interest because executing the logged program objects requires a
great deal of time or certain authorizations, you can hide this layer and let the trace run on a lower application layer, for example BI Core
Calculation Layer, with the logged inbound parameters.
Logged Program Objects Tab Page
The Logged Program Objects tab page displays all the important information about the program objects whose interface parameters were logged (in the
Reporting, Planning and OLAP Technology application area, while a query was executed) and lets them be played again.
In this sense, program objects are:
Function modules
Static methods
Instance methods
By default, the system only displays those program objects in the call table that belong to the application layer set for the trace (see Basis Attributes tab
page).
Change Call View gives you the option of displaying all of the logged program objects for this trace.
Using Parameters , the system displays the values of the parameters for each program object as XML.
Double-click on a table entry to start the Trace Execution :
If the program object of this table entry is in the application layer chosen on the Basis Attributes tab page, the ABAP Debugger appears. The system
stops directly before calling the program object.
If the program object is in a different application layer, the system executes the entire trace. The run is documented in corresponding messages.
The information in the table includes:
Special information on the Logged Program Objects tab page

Column              Description
Sequence number     In the Reporting, Planning and OLAP Technology application area, the
                    sequence number specifies the sequence of the query navigation. See
                    the Sequence Descriptions tab page.
Item                Sequential number of the program objects, together with the ID for
                    the call.
Program type
Program module
Framework program
Runtime
Layer depth         Application layer depth: current position (nesting depth) within an
                    application layer. Layer depth = 0 is the entry point for playback on
                    the respective application layer.
Stack depth         Current position (depth) in the call list, relative to the highest
                    call module.
Application layer
Test object
The following image illustrates the determining of layer and stack depths for the function modules FU1, FU2 and FU3 using an example from the Reporting,
Planning and OLAP Technology application area:
This tab page only applies to CATT traces (not standard traces).
It is a prerequisite that the user is activated not only for logging the trace, but also for generating the test (see Logging User Actions).
In addition to the information on the Logged Program Objects tab page, the table includes the following:

Special information on the Testable Program Objects tab page

Column                Description
Check-position type   Developer information: in the Reporting, Planning and OLAP
                      Technology application area, this could be one BI query or, for Web
                      applications, several BI queries.
Sequence Descriptions Tab Page
The Sequence Descriptions tab page provides you with an overview of all of the logged sequences for this trace run.
In the Reporting, Planning and OLAP Technology application area, a sequence corresponds to a navigation step in the BI Query.
In the change mode, you have the option of labeling the sequences with a descriptive text.