
HANA Live - Attribute Views??

Former Member
0 Kudos

Hello Experts,

We recently installed the HANA Live content, virtual data models, HANA Live browser (all the good things) and started replication with SLT in a side-by-side scenario with the Enterprise edition.

I was surprised to see that there are no attribute views delivered as part of the virtual data models. Everything is a calculation view, with a number of joins (at times 10+) even for simpler data models.

I'm wondering why SAP didn't deliver a single attribute view and has modeled everything as calculation views.

Even to get some of the basic master data, we need to do enhancements, which could be a pain.

Any insight?

Thanks.

Abhijeet

Accepted Solutions (1)


former_member184768
Active Contributor
0 Kudos

Hi Abhijeet,

I do not have any experience with HANA Live. But from a modeling perspective, I can say that even the current SAP-generated models for BW objects on HANA produce Analytic views (for DSOs) and Calc views (for InfoCubes).

Attribute views are good from the perspective of maintenance and development standards / reusability, but from the performance perspective, when the attribute views are used in Analytic views and Calc views, they are resolved to table joins. Hence it doesn't matter if the HANA views use Attribute views or tables in the view definitions.

Regarding the joins, I think not all of them will be executed when you fire a query against the model. You can see that yourself in the visualize plan.
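For instance, a quick sketch of how you could check this from the SQL console (the view name below is just a placeholder for whichever HANA Live view you test):

-- Sketch: wrap the query in EXPLAIN PLAN, then read back the operators
-- that actually appear in the plan. View name is a placeholder.
EXPLAIN PLAN SET STATEMENT_NAME = 'live_view_test' FOR
  SELECT * FROM "_SYS_BIC"."your.package/YourCalcView";

SELECT operator_name, operator_details, table_name
  FROM explain_plan_table
 WHERE statement_name = 'live_view_test';

If joins are pruned away, the corresponding tables simply won't show up as operators in the plan.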

Well, I am sure you will get more technical reasoning behind this modeling from SAP itself; I just put in my two cents.

Regards,

Ravi

Answers (4)


Former Member
0 Kudos

For some reason, after running the view for two days, the results have now improved to around 2 seconds for 1,000 rows. I increased the number of entries to 10,000 and it takes 7 seconds. Still not really good for an in-memory database.

I checked the other HANA Live views and they give me roughly the same numbers...

I switched the execution engine from SQL Engine to blank, but the result is still the same.

Jonathan_Haun
Participant
0 Kudos

Sergio,

Take a look at the following presentation. Near the end of the presentation there are a few slides on how to invoke the "Visualize Plan" utility for a SQL statement in HANA Studio. Can you post your "Visualize Plan" results?
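If it helps to locate the statement first, you could pull the most expensive statements straight from the plan cache - roughly like this, assuming your user can read the monitoring views:

-- Rough sketch: find the most expensive statements to visualize
SELECT TOP 10
       statement_string,
       execution_count,
       total_execution_time   -- microseconds
  FROM m_sql_plan_cache
 ORDER BY total_execution_time DESC;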

lbreddemann
Active Contributor
0 Kudos

Interesting comment:


Sergio Alsis wrote:

For some reason, after running the view for two days, the results have now improved to around 2 seconds for 1,000 rows. I increased the number of entries to 10,000 and it takes 7 seconds. Still not really good for an in-memory database.

What would be good for an in-memory database?

What is your expectation built on?

- Lars

RonaldKonijnenb
Contributor
0 Kudos

On 3.5 billion scans/s/core, and 12.5 - 15 million aggregations/s/core?

lbreddemann
Active Contributor
0 Kudos

Ok, cheeky answer, but aggr/s/core is not a response time.

It's a very low level performance characteristic that is only partly relevant for information models.

For complex information models, data is aggregated, projected and transformed multiple times in a stacked fashion. Stacked means that some of the processing has to wait for earlier steps to finish, leading to a staged processing scheme. In short: you won't get all cores working in parallel in all processing steps.

RonaldKonijnenb
Contributor
0 Kudos

Great thread guys. I am wondering though why SAP does not come back with an answer to the question which has been raised a couple of times:

Why are HANA Live views built exclusively as graphical calculation views?


I'm guessing there is no answer to the question?

former_member194613
Active Contributor
0 Kudos


The answer was given above.

Jonathan_Haun
Participant
0 Kudos

My guess (just a guess, because I can't read their minds) is the following:

Because the Information Views are built directly on normalized ECC tables, the developers found that most of the denormalization of the data would require the use of calculation views and their provided functionality. They then made the decision to build everything from calculation views for consistency.


My Thoughts:

I looked through most of the HANA Live ECC content and found that it would have been difficult to conform all of the tables into a standard multidimensional HANA model (i.e., Attribute Views and Analytic Views only). With that said, there are still areas where they could have used Attribute Views. For example, KNA1_COM (Customers) and its adjoining tables could have been conformed into a Customer Attribute View. The same could be said of CEPC (Profit Center) and many other calculation views. I guess in the end it did not matter, because there are multiple calculated columns being used and most of the processing would have remained in the Calculation Engine regardless of which view type was used. I also noticed that some of the calculation views were configured to execute in the SQL Engine for better performance. This option only appears to be available in a Calculation View. The use of more Attribute Views would have been nice, but only as it relates to auto-generated Universes in BOBJ 4.1.
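To illustrate, the flattening that such a Customer Attribute View would encapsulate is essentially a join like this (a simplified sketch; only a few representative columns shown):

-- Simplified sketch of the customer master join an Attribute View could model
-- (KNA1 = customer master, ADRC = central address data)
SELECT k.kunnr   AS customer_number,
       k.name1   AS customer_name,
       a.city1   AS city,
       a.country AS country
  FROM "SAP_ECC"."KNA1" k
  LEFT OUTER JOIN "SAP_ECC"."ADRC" a
    ON a.addrnumber = k.adrnr;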

RonaldKonijnenb
Contributor
0 Kudos

It must be me then.

RonaldKonijnenb
Contributor
0 Kudos

Tx Jonathan. I believe you're right.

former_member194613
Active Contributor
0 Kudos

These are the HANA Live design criteria, right from the beginning:

However, back to the original question: HANA Live views are built exclusively as graphical calculation views; even scripted views are exceptions, and analytic and attribute views are not allowed.

All these considerations may well have been made, but they were settled before the program started in January 2012.

I also noticed that some of the calculation views were configured to execute in the SQL Engine for better performance.

Good; this is the HANA Live performance optimization and should be the default. It is available only for calc views.

Best regards, Siegfried

Former Member
0 Kudos

Having used / enhanced HANA Live (along with BW on HANA) for more than 3 months now:

The design for keys and texts (for HANA graphical calc views) is extremely poor. From the first levels (COM structures, e.g. VBAK_COM), the tech names of fields, e.g. BUKRS (Company Code), are lost. The descriptions are used as technical names. In certain types of enhancements, only the tech name comes over and the description doesn't.

On top of that, in order to make the views business friendly (and not have direct ECC descriptions for fields), there is extensive renaming in the aggregation levels of the calculation views. As soon as you copy and try to enhance those views, all the renaming information gets lost and the standard universe / BOBJ reports break.

Also, from a security perspective, if we join data from BW InfoObjects, the names become highly inconsistent.

If they had used attribute views, we could have consumed HANA master data in BW InfoObjects. But that's not possible in the absence of attribute views, causing master data duplication in BW and HANA.

Long way to go.

Thanks much for great discussion and some responses from SAP.

Abhijeet

Former Member
0 Kudos

It must be me too.

It is sad to read such an interesting discussion without having a final answer.

HANA modeling is a tough subject, as reusability of the lower layers isn't part of HANA Live (though then why do attribute views / analytic views exist at all?) and the mechanism of technical names / field descriptions isn't really efficient.

It would have been interesting to get some examples from SAP in HANA Live, or the reasons for this choice, to improve our modeling for our customers...

Best regards,

Emilien

Former Member
0 Kudos

For those of you who have direct experience with HANA Live installations, what are your average response times for the HANA Live views executed directly in HANA Studio or through a BOBJ report?

We have found that our average response time across multiple BO tools (Webi, Design Studio, Crystal Reports) is 12 to 16 seconds, which is very slow. Are we missing some config piece?

We ran the views directly in HANA Studio and the average time is 12 seconds when executed the first time, which is also quite slow for the very limited amount of data that we have.

Thanks,

Sergio

Jonathan_Haun
Participant
0 Kudos

Sergio,

If you have a limited amount of data, you should not see response times in this range, although I have seen slower response times when the transaction tables exceed 2 million records. The HANA Live views are built as calculation views that, for the most part, execute in the Join Engine by design. If I build something custom using Analytic Views, the response times are often better because the processing takes place predominantly in the OLAP engine.

It would help if you could share more information about your version of HANA and your HANA hardware. Are you running true Suite on HANA or using the sidecar option with SLT to provision HANA? Which HANA Live business content are you using, etc.?

justin_molenaur2
Contributor
0 Kudos

I have found HANA Live models to be in a 'rough draft' state in general. As Jonathan mentioned, some of them are not exactly designed to take advantage of the performance optimizations available in calc views, and most times you will get better performance from an AV.

Jonathan - I would challenge you to find an ECC transaction table that has LESS than 2 million records.

Regards,

Justin

Former Member
0 Kudos

Hi, thanks for the replies guys.

I also replied with this info in another post, but will copy it here, so maybe we'll get to the solution quicker.

We are using the sidecar scenario, where we bring the ECC 6.0 data to HANA through SAP LT. It's normal ECC, not ERP on HANA.

It's HANA version 1.0, SPS 05, the latest I believe.

It's a demo system, so we don't have millions of data records - just about 8,000 sales orders, for example.

The view I'm trying is SalesOrderValueTrackingQuery.

The report name is "Sales Amount Analysis", which is an SAP Design Studio based dashboard.

These are the tables used:

SAP_ECC.ADRC
SAP_ECC.MAKT
SAP_ECC.PA0001
SAP_ECC.KNA1
SAP_ECC.T001
SAP_ECC.T006
SAP_ECC.TSPA
SAP_ECC.TSPAT
SAP_ECC.TVAK
SAP_ECC.TVAKT
SAP_ECC.TVKO
SAP_ECC.TVKOT
SAP_ECC.TVTW
SAP_ECC.TVTWT
SAP_ECC.TSAD3T
SAP_ECC.VBAK
SAP_ECC.VBAP
SAP_ECC.VBEP
SAP_ECC.VBFA
SAP_ECC.VBKD
SAP_ECC.VBPA
SAP_ECC.VBUK
SAP_ECC.VBUP
SAP_ECC.VEDA

This is what I got after running it in SQL:

------------------

Statement 'SELECT * FROM "_SYS_BIC"."sap.hba.ecc/SalesOrderValueTrackingQuery"'

successfully executed in 12.936 seconds (server processing time: 12.444 seconds)

Fetched 1000 row(s) in 1.969 seconds (server processing time: 11 ms 886 µs)

Result limited to 1000 row(s) due to value configured in the Preferences

--------------------------------------

As you can see, I ran it again and again, and it takes almost 13 seconds every time. So either the view gets unloaded every time I disconnect, or something else is the issue here.

Also, there are 8,120 records in the main underlying table VBAK.

The memory usage is also very low.

I think we might be missing some basic setting here.
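One check I still plan to run to rule out the unloading theory (a sketch, not executed yet):

-- Check whether the replicated tables are actually loaded into memory
SELECT table_name, loaded, memory_size_in_total
  FROM m_cs_tables
 WHERE schema_name = 'SAP_ECC'
   AND table_name IN ('VBAK', 'VBAP', 'VBEP');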

Thanks,

Sergio

Jonathan_Haun
Participant
0 Kudos

I would assume that something is very wrong with the SAP HANA setup or configuration. Do you know which certified server was used (found on the PAM)? Was this implemented under SAP HANA TDI?

justin_molenaur2
Contributor
0 Kudos

I took a look at this specific view. It has 5 projections, 9 joins, 1 union and 2 different aggregations, so I wouldn't say it's very simple. Obviously SAP intended this to be used over a number of sales orders, not just a single one or a handful chosen via an input parameter, and obviously it doesn't scale.
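If you do want to test with a restricted set, the input parameter syntax is roughly like this (the parameter name below is made up - check the view's definition for the real one):

-- Sketch: restrict the query via an input parameter (hypothetical name)
SELECT *
  FROM "_SYS_BIC"."sap.hba.ecc/SalesOrderValueTrackingQuery"
       ('PLACEHOLDER' = ('$$P_SalesOrderID$$', '0000008120'));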

Also of note: these are all set to execute in the SQL Engine. I wonder if switching that to blank (so that it executes in the calc engine) has any effect. Why don't you try this out?

Otherwise, I would think about implementing the functionality you need in an Analytic View, if you can pull it off.

Regards,

Justin

former_member194613
Active Contributor
0 Kudos

I did not read all the postings. However, back to the original question: HANA Live views are built exclusively as graphical calculation views; even scripted views are exceptions, and analytic and attribute views are not allowed.

justin_molenaur2
Contributor
0 Kudos

Siegfried - as has been discussed a few times here and also outside this thread, can you elaborate as to why this approach was taken? Every 'best practices' document has always indicated starting with the most basic HANA modeling artifacts when possible and moving towards calculation views only when the required functionality cannot be achieved otherwise.

Regards,

Justin

Former Member
0 Kudos

Thanks everyone for great discussion.

Now, having used HANA Live for a couple of months and done several enhancements, I find it extremely annoying to have everything as a calculation view, especially from a classic BW master data perspective.

  1. Modeling texts and master data both gets too complicated. With classic BW, we can add attributes and they show up easily on all queries. In HANA Live, if we add any attributes, we need to specifically promote them to individual views, and that doesn't seem very "re-usable".
  2. In some projections and aggregations, we completely lose the ECC fields' tech names. So in lower-level views (COM structures, e.g. VBAK_COM) we may have fields with tech names, but at higher levels we lose them. Managing everything with just descriptions is highly inefficient.
rama_shankar3
Active Contributor
0 Kudos

Abhijeet,

  Did you review the HANA Live model documentation at http://help.sap.com/hba? Which module of HANA Live (CRM / core ECC, etc.) are you implementing? Since general availability was only last month, you might want to get the latest documentation links via an OSS customer message or your internal SAP account manager.

Hope this helps.

Regards,

Rama

Former Member
0 Kudos

@Rama - HANA Live for ECC GA was in Dec 2012. They are on SP 03 right now, released in Aug 2013. We have reviewed all the necessary documents on the /hba site.

@Ravindra - Great reasoning on the performance perspective. But it's a big thumbs down from the reusability perspective, especially for master data. In a classic BW scenario, you know that once we set up 0MATERIAL, we can assume all its attributes are available on all transactional data. But here, we have to add individual attributes or, in many cases, a complete join to the Material calculation view. To me this is going backwards.

Also - with HANA Live views not being attribute views, if we are doing mixed scenarios, we cannot use those views/tables to have virtual master data in BW.

former_member184768
Active Contributor
0 Kudos

Hi Abhijeet,

Last year, during my discussions with the HANA development team, it was mentioned that when generating HANA information models based on DSOs / InfoCubes, the Analytic views / Calc views would not contain any Attribute views, due to the complexity of maintaining the reusability aspect. Currently, all the models imported from BW objects contain ALL the tables for the InfoObjects (P tables and T tables) in the Analytic views. If an InfoObject is used in multiple DSOs, then the same P and T tables are repeated in each of the generated Analytic views. I think the same concept might have been used in the HANA Live content.

But that was last year; this year we have been informed that with the new SPS 07 of HANA and BW 7.4, attribute views will be created for the InfoObjects. I'm not yet sure if those attribute views will be used in the generated Analytic views, but at least the first step, generating attribute views, is expected to be taken.

I'd suggest you discuss along similar lines with SAP and check whether attribute views are likely in the next releases / SPS 07.

Regards,

Ravi

Former Member
0 Kudos

Thanks Ravi for the insight. I think that makes sense. We are on 7.4 and SPS 06. Will check with SAP regarding the future direction and whether an upgrade of the HANA Live content will be applicable.

My problem is: if I start exposing a lot of the missing master data pieces in calculation views for transactions, then my upgrade options will be diminished.

former_member184768
Active Contributor
0 Kudos

Completely agree with you. That's why it makes sense to request that SAP provide more clarification and more reusability in the HANA models.

As of today, we avoid creating attribute views in HANA based on BW InfoObjects. Let's see how SAP will provide it in BW 7.4.

Regards,

Ravi

Former Member
0 Kudos

Hi Ravindra, just one quick note on your comment -

"Attribute views are good from the perspective of maintenance and development standards / reusability, but from the performance perspective, when the attribute views are used in Analytic views and Calc views, they are resolved to table joins. Hence it doesn't matter if the HANA views use Attribute views or tables in the view definitions."

Unfortunately this is not the case (although I wish it were). We recently came across awful performance problems in a very simple analytic view. We moved the tables into the Data Foundation rather than joining them as Attribute Views, and performance improved significantly. This is, of course, contrary to modeling "best practice".

One interesting exercise is to create the world's simplest/smallest star schema, with one fact table and one dimension table (a minimal setup is sketched after the list), and model it in the following ways:

1) Analytic View with Attribute VIew

2) Analytic View with dimension table in Data Foundation

3) Calculation View (with base tables or An/At Views)

4) Calculation View, executed in SQL Engine (option in Properties pane)

5) SQL
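For reference, the baseline setup can be as small as this (a sketch; table and column names are invented):

-- Minimal star schema for the comparison above (invented names)
CREATE COLUMN TABLE sales_fact (
    customer_id INTEGER,
    amount      DECIMAL(15,2)
);

CREATE COLUMN TABLE customer_dim (
    customer_id INTEGER,
    region      NVARCHAR(20)
);

-- Option 5 (plain SQL) as the baseline query:
SELECT d.region, SUM(f.amount) AS total_amount
  FROM sales_fact f
  JOIN customer_dim d ON d.customer_id = f.customer_id
 GROUP BY d.region;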

What's interesting is that every single approach generates a different visualize plan for the respective queries - except for options 4 and 5 (which shows that the SQL Engine option for graphical calc views does indeed work).

It appears there is more than one way to skin a cat...

Cheers,

Jody

former_member184768
Active Contributor
0 Kudos

Hi Jody,

Thanks for the insight. It is quite helpful and I agree with your observation. Recently we too experienced a performance improvement when the tables were moved into the data foundation of the Analytic view.

My comment was based on my interpretation of an answer by Lars in another discussion:

If my Analytic View foundation is joined to attribute views, is both the OLAP and JOIN Engine used?

A2: Nope - during activation of the analytic views, the joins in the attribute views get 'flattened' and included in the analytic view run time object. Only the OLAP engine will be used then.

So I interpreted it as: it shouldn't matter whether the table is included via an Attribute view or as part of the data foundation. If the joins are flattened and included in the Analytic view runtime object, then there should not be any impact on performance.

Thanks for your comment.

Regards,

Ravi

justin_molenaur2
Contributor
0 Kudos

Just to add some more information here regarding testing out multiple ways of implementing a model. This all came about when we realized that all of HANA Live is delivered completely as calculation views, as the OP had brought up. I have yet to hear a good reason why, and we constantly hear that 'best practice' is to always use an analytic view if possible. So it was surprising that even simple HANA Live views are built as calculation views.

when the attribute views are used in Analytic views and Calc views, they are resolved to table joins. Hence it doesn't matter if the HANA views use Attribute views or tables in the view definitions.

I recently set up a similarly structured test geared towards the performance of a simple Analytic View (4 dimensions) vs. the exact same functionality implemented in a calculation view as HANA Live would deliver it (a series of joins).

My process was to issue queries against these views: first using the fact table only, then using two fields from one dimension, then two dimensions, then three dimensions, and so on. What I found is that as the number of joins increased, the performance of the calculation view deteriorated more sharply than that of the Analytic view. In fact, the Analytic view performance remained almost constant even with more joins, while the calc view saw 2x - 4x performance degradation.
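The progression looked roughly like this (a reconstructed sketch; the view name and columns are illustrative, not our real model):

-- Step 1: measures from the fact table only
SELECT SUM(amount) FROM "_SYS_BIC"."test/AV_SALES";

-- Step 2: add two fields from one dimension
SELECT region, channel, SUM(amount)
  FROM "_SYS_BIC"."test/AV_SALES"
 GROUP BY region, channel;

-- Step 3: add a field from a second dimension, and so on
SELECT region, channel, material_group, SUM(amount)
  FROM "_SYS_BIC"."test/AV_SALES"
 GROUP BY region, channel, material_group;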

Just thought I would share findings in support of Jody's comment that there is more than one way to do anything in HANA.

Thanks,

Justin

Former Member
0 Kudos

Interesting, thanks for sharing Ravi. Will see if I can't get Lars to chime in here...

Jonathan_Haun
Participant
0 Kudos

Generating the Attribute Views will be helpful for the auto-generated BusinessObjects HANA-based universes. Starting in 4.1, the IDT will examine the underlying SAP HANA model when it designs the IDT Business Layer. It uses the Attribute Views in HANA to determine which objects are reusable and which objects are unique. In short, it tries to avoid proliferating the attribute columns in a universe that is based on multiple information views.

If importing the BW content resulted in the creation of Attribute Views, Analytic Views and Calculation Views, the universe design process could be greatly simplified.

Jonathan_Haun
Participant
0 Kudos

Do you know the version of SAP HANA that was used in your investigation? I have seen the optimizer make bad decisions in some SPS 04 and SPS 05 versions of HANA. I performed a similar test on revision 62 and found zero difference between the plans when using an Attribute View or a join in the Data Foundation. The OLAP engine processed both information views exactly the same.

lbreddemann
Active Contributor
0 Kudos

Hey Jody,

clearly, the modelling "best practice" aims at an imaginary common/average use case.

A star schema with just one dimension table is an edge case.

The very problem of the star query (applying filters from multiple joined tables at once to the one big table where aggregation takes place) is just not present in this scenario.

The SAP HANA optimizers try to be clever about edge cases (ever noticed how developers often tend to focus on edge cases?) and will eventually decide to process the query in a different way.

Specifically for the OLAP engine, there are e.g. operations (POPs) that can sometimes be used to combine the work of multiple POPs (e.g. POP1 + POP3 can sometimes be replaced with POP13).

On top of that (you know what's coming now): SAP HANA is still a very fast moving target.

Things do change a lot and often.

And if you're dealing with very specific scenarios, 'suddenly' optimizations could show up in the SAP HANA code that help these specific scenarios.

Should you come across a situation where it's rather obvious how to process the query best, but SAP HANA does it in a slower way: that would be a case for a support message in my eyes.

Cheers, Lars

Former Member
0 Kudos

Lars Breddemann wrote:

Hey Jody,

clearly, the modelling "best practice" aims at an imaginary common/average use case.

A star schema with just one dimension table is an edge case.

The very problem of the star query (applying filters from multiple joined tables at once to the one big table where aggregation takes place) is just not present in this scenario.

Thanks Lars. As you know, I always try to reduce weird behavior to the simplest reproducible situation possible. The actual scenario that we're working with involves 4 attribute views with multiple filters. Modeling in the DF rather than as AT views also gives different VizPlan results and much better performance. So the 'edge case' above is rather a 'simple case' of the same observation.

"The SAP HANA optimizers try to be clever about edge cases (ever noticed how developers often tend to focus on edge cases?)..."

Yes - as a former developer, edge cases are the ones that get you.

"Specifically for the OLAP engine, there are e.g. operations (POPs) that can sometimes be used to combine the work of multiple POPs (e.g. POP1 + POP3 can sometimes be replaced with POP13)."

Interesting, didn't know that. Thanks!

"Should you come across a situation where it's rather obvious how to process the query best, but SAP HANA does it in a slower way: that would be a case for a support message in my eyes."

Agreed. For now we're on Rev 61, and I know the first thing support will say: upgrade and try again. We're supposed to upgrade to 67 any day now, so I'll re-test then and potentially open a support message. If I learn anything valuable, I'll update this post.

Former Member
0 Kudos

Interesting, thanks for sharing Jonathan. We may just be one revision behind. We're on Revision 61. We're supposed to upgrade to 67 any day now, at which point I'll re-test and post any interesting findings here.

Cheers

Jody

Former Member
0 Kudos

Hi all,

We did the same test and here are our findings.

Having calculated fields in an attribute view causes performance degradation when it is used in a query. When you have calculated fields in an AT view and run the visualize plan, you will see both the OLAP and CALC engines come into play.

Excluding the calculated fields from the query improved the performance. So we ended up using DS (Data Services) to populate the table itself with the calculated fields, rather than creating calc fields in the AT view.
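In SQL terms, the idea is roughly this (illustrative names; in our case the Data Services dataflow does the population):

-- Materialize the calculation in the table at load time
-- instead of defining a calculated column in the attribute view
ALTER TABLE my_master_table ADD (net_value DECIMAL(15,2));

UPDATE my_master_table
   SET net_value = gross_value - discount_value;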

Hopefully this helps everyone.

Regards

Purnaram.k

0 Kudos

This is exactly what we are seeing. Attribute views with calculated columns don't perform well with large data sets... especially if you are using an undersized machine.

Jonathan_Haun
Participant
0 Kudos

I generally (when possible!) try to perform the calculations within SLT (IUUC_REPL_CONTENT) or Data Services (Query transform) to help with the overall performance. When you avoid using calculated columns in any information view, your view is more likely to stay completely within the OLAP engine for processing. As a side note, if you're running Suite on SAP HANA (not sidecar), your options are more limited.

Former Member
0 Kudos

Under-sized as in CPU?