
split tables for DMO

Former Member
0 Kudos

I am preparing to use DMO to upgrade and convert an Oracle/Unix-based BW 7.1 system to BW 7.4 on HANA/SUSE.

Some tables are very big, so I plan to split them.

However, the DMO documentation does not show a step where table splitting can be configured.

Could you please share your experience on how to do this?

Thanks!!

Accepted Solutions (1)


Boris_Rubarth
Product and Topic Expert
0 Kudos

Hi Christina,

Indeed, you do not have to take care of table splitting when using DMO, as the SUM tool automatically splits large tables.

Regards, Boris

[Product Management SUM, SAP AG]

Former Member
0 Kudos

Hi Boris,

We have many large tables, all greater than 100 GB, but DMO chose only 3 large and 9 small tables to split. As a result, we are left waiting for the large tables still running at the end of the phase.

Could you share with us the logic DMO uses to decide which tables to split?

3 ETQ399 Looking for tables to be split:                                                                       

4 ETQ399 /BIC/AZI_BLS0100               size   358470/  358470 MB split with segment size 0.500000             

4 ETQ399 /BIC/B0004785000               size   349106/  349106 MB split with segment size 0.500000             

4 ETQ399 /BIC/AZFIGLO0200               size   300177/  300177 MB split with segment size 0.500000             

4 ETQ399 RSBATCHDATA                    size    50614/   25307 MB split with segment size 0.267057 (has 1 blobs)

4 ETQ399 RSZWOBJ                        size     9406/    4703 MB split with segment size 0.500000 (has 1 blobs)

4 ETQ399 ARFCSDATA                      size    25138/   25138 MB split with segment size 0.268852             

4 ETQ399 RSBMREQ_DTP                    size    25147/   25147 MB split with segment size 0.268756             

4 ETQ399 RSBMONMESS_DTP                 size    22417/   22417 MB split with segment size 0.301485             

4 ETQ399 RSBMLOGPAR_DTP                 size    14525/   14525 MB split with segment size 0.465294             

4 ETQ399 RSODSACTDATA                   size     7244/    3622 MB split with segment size 0.500000 (has 1 blobs)

4 ETQ399 RSSELDONE                      size     6798/    6798 MB split with segment size 0.500000             

4 ETQ399 BDLDATCOL                      size     9882/    4941 MB split with segment size 0.500000 (has 1 blobs)

3 ETQ399 Identified 12 large tables out of 52282 entries.            
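For what it is worth, the slice count appears to follow directly from the segment size in the log: if the segment size is the fraction of the table handled per slice (the 1/x relation quoted later in this thread for EUCLONEDEFS_ADD.LST), then slices = ceil(1 / segment size). A minimal Python sketch of that reading of the log format (my assumption, not an official specification):

import math
import re

# Assumption: segment size = fraction of the table per slice,
# so number of slices = ceil(1 / segment_size); 0.500000 -> 2 slices.
LOG = """\
/BIC/AZI_BLS0100               size   358470/  358470 MB split with segment size 0.500000
RSBATCHDATA                    size    50614/   25307 MB split with segment size 0.267057
ARFCSDATA                      size    25138/   25138 MB split with segment size 0.268852
"""

pattern = re.compile(r"(\S+)\s+size\s+(\d+)/\s*(\d+) MB split with segment size ([0-9.]+)")

for line in LOG.strip().splitlines():
    match = pattern.search(line)
    if match:
        table = match.group(1)
        segment_size = float(match.group(4))
        print(f"{table}: ~{math.ceil(1.0 / segment_size)} slices")

Read this way, the three 300+ GB tables above get only 2 slices each (segment size 0.5), which would explain waiting on a few long-running large tables at the end.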

ankit_gupta32
Explorer
0 Kudos

Hello,

When DMO finishes all phases, it generates UPGANA.xml and MIGRATE_DT_RUN.LST, which can be provided to the next DMO run by placing them in the download directory. That run should then use the recorded runtimes to split tables in a better way. That is the theory, at least.
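A minimal sketch of that hand-over between runs, in Python; the directory paths are purely illustrative and must be adjusted to the actual SUM layout:

import shutil
from pathlib import Path

# Hypothetical paths -- replace with your real directories.
previous_run = Path("/usr/sap/SID/SUM_run1/result")  # wherever the previous run left its files
download_dir = Path("/usr/sap/SID/download")         # download directory given to the next DMO run

for name in ("UPGANA.xml", "MIGRATE_DT_RUN.LST"):
    src = previous_run / name
    if src.exists():
        shutil.copy2(src, download_dir / name)  # the next run reads these to refine the splitting
        print(f"copied {name}")
    else:
        print(f"{name} not found in {previous_run}")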

In one of the systems where we tried DMO, a big table (roughly 200 million rows) was not split at all. When we supplied these XML files, DMO split the table into 5 portions in the second run. A new set of XML files was generated, and we reused them for the third DMO run, but the split factor was reduced to 4, which is strange. I was expecting at least the same number of splits, if not more.

@Boris: Let me know if you can share some information in this regard.

Regards,

Ankit

0 Kudos

Hello Ankit,

Did you compile any more information about the behavior of the table-splitting algorithm used by DMO?

I had a similar situation with different splitting decisions, and the only variable appears to be the number of R3load processes that we entered in the configuration phase.

The worst scenario was a migration attempt on the production instance: DMO decided not to split the biggest table on the Oracle side (CRMORDERCONT, which contains a huge volume of attachments stored as a LOB segment). We had to abort the migration.


Now we are using the benchmarking tool for many simulations, trying to find a better distribution and consistent behavior. We experimented with manual table splitting via the EUCLONEDEFS_ADD.LST file to force splitting of CRMORDERCONT in the new simulations. The line "CRMORDERCONT split segmentsize=0.01" produces 100 slices, i.e. slices = 1/segmentsize.
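Building on that 1/x relation, a small Python sketch that writes such EUCLONEDEFS_ADD.LST entries from a desired slice count per table (the table list and slice counts are illustrative only):

# segmentsize = 1 / desired_slices, e.g. 0.01 -> 100 slices.
desired_slices = {
    "CRMORDERCONT": 100,     # the LOB-heavy table mentioned above
    "/BIC/AZFIGLO0200": 50,  # example entry only
}

with open("EUCLONEDEFS_ADD.LST", "w") as f:
    for table, slices in desired_slices.items():
        f.write(f"{table} split segmentsize={1.0 / slices:.6f}\n")
        # writes e.g. "CRMORDERCONT split segmentsize=0.010000"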

We detected another fatal point of failure: if a pipe breaks for one of the biggest tables, the table is reprocessed in its entirety, since the pipe method has no separate commit/rollback per processed slice. The pipe broke with no overload situation on the server, so it appears to be a communication issue, and it is very difficult to diagnose. Unfortunately, this issue also threatens the maintenance window. Moreover, if the migration aborts repeatedly, the migration log files are overwritten, which hurts later analysis and optimization (for example, the MIGRATION_DT_DUR.XML file ends up with missing entries).

Best regards,

Rodrigo Aoki

ankit_gupta32
Explorer
0 Kudos

Hello Rodrigo,


When you specify the number of R3load processes in the configuration phase, you can set it to a high value (e.g. 100 or 200) so that the splitting factor is calculated assuming high R3load availability. This ensures more splits and more packages in the bucket. When you are about to enter the downtime phase, you can start with a lower R3load value (say 20 or 30, depending on the available CPU) and keep increasing it dynamically according to CPU usage and capacity.


It is also important that Oracle statistics are calculated before you start DMO.


It is very tricky that even a single broken-pipe failure causes the complete table to be reimported, and you also cannot use the XML files for the next iteration: so far, DMO expects a perfectly clean last run.

If the broken-pipe issue occurs very often, it is worth opening an SAP message with specific system details. There are some timeout parameters that might help, or a specific HANA version (for example, the idle connection timeout parameter in HANA can be set to 0 for migration purposes).
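For reference, a sketch of setting that timeout through the HANA SQL interface, using the hdbcli Python client. The exact parameter location (idle_connection_timeout in indexserver.ini, section session) is my assumption of the parameter meant here; verify it against your HANA revision and restore the original value after the migration:

from hdbcli import dbapi  # SAP HANA Python client

# Assumption: the "idle connection timeout" above is idle_connection_timeout
# in indexserver.ini, section [session]; the value 0 disables the timeout.
conn = dbapi.connect(address="hana-host", port=30015, user="SYSTEM", password="...")
cursor = conn.cursor()
cursor.execute(
    "ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM') "
    "SET ('session', 'idle_connection_timeout') = '0' WITH RECONFIGURE"
)
conn.close()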



Regards,

Ankit Gupta

0 Kudos

Hello Ankit,

First, thanks for the tip about the idle connection timeout parameter. It certainly must be considered when using a slow network connection.

But in fact, we found the main cause of the bad performance. We were focused on actions and analysis on the source Oracle instance but forgot a simple detail about the target instance: the productive HANA appliance was configured to use two pairs of 10 Gbps network cards for replication data and BI communication, leaving only a single 1 Gbps card for application/client traffic, despite the 10 Gbps backbone.

Fortunately, we had a borrowed machine, used for a Hybris POC, with full 10 Gbps cards. Running the benchmarking tool for the same source database against this other HANA box, the execution time dropped by a factor of 3 to 4, from the initial 14 hours to 4 hours. We had no "broken pipe" problems with this hardware combination.

The HANA appliance for the Development and Quality Assurance instances does not use replication, so its 10 Gbps card was free to receive the (massive) migration data.

So, in DMO migration scenarios, the old rules of thumb still apply: network bottlenecks can affect performance even with the R3load pipe method, since the R3load processes dedicated to the import still use the network to reach the appliance.

Best regards,

Rodrigo

Boris_Rubarth
Product and Topic Expert
0 Kudos

Hi Rodrigo,

Looks like another confirmation of my latest blog, but it was published too late for your project.

Best regards, Boris

0 Kudos

Hi Boris,

That is exactly the point! Table splitting did not help without adequate throughput between the source and target systems. The use of pipes by the DMO/R3load processes makes this hard to diagnose, so a file transfer test can be the key test.

As you mentioned, the nominal traffic theoretically reaches 439 GB/hour for 1 Gbps cards, but in my case (I suppose it depends on the network topology) the average throughput was 25 GB/hour! With a 10 Gbps card, the average increased to 150 GB/hour.
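As a back-of-the-envelope check of those figures, a tiny Python sketch (raw link arithmetic only; the gap between 450 GB/hour raw and the quoted 439 GB/hour is presumably protocol overhead):

# Nominal throughput of a network link, ignoring protocol overhead.
def nominal_gb_per_hour(link_gbps: float) -> float:
    return link_gbps / 8 * 3600  # bits -> bytes, then seconds -> hours

print(nominal_gb_per_hour(1))   # 450.0 GB/h raw vs. 25 GB/h observed: the 1 Gbps card was the bottleneck
print(nominal_gb_per_hour(10))  # 4500.0 GB/h raw vs. 150 GB/h observed: the network no longer limits the run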

Best regards,

Rodrigo

Answers (2)


Former Member
0 Kudos

In one of my projects, I experienced constellations where the size calculation during DMO was not adequate for certain objects. We found tables that were treated as 10 GB tables even though they occupied 70 GB in DBA_SEGMENTS in a compressed Oracle database.

In our case, we received an updated DBSL, which finally led to an appropriate size calculation for the identified problem objects.

Please refer to SAP note 2328446.

Best regards,

Ansgar

former_member188883
Active Contributor
0 Kudos

Hi Christina,

Please refer to the SAP Note below for information on table splitting on HANA:

1783927 - Prerequisites for table splitting with target SAP HANA database


I also came across an excellent blog on migration to SAP HANA:

http://scn.sap.com/community/bw-hana/blog/2013/08/29/sap-bw-powered-by-sap-hana-some-points-to-remem...


Hope this helps.


Regards,

Deepak Kori

Former Member
0 Kudos

Deepak:

Thanks for your reply.

My problem is as follows:

I have done OS/DB migrations many times, including migrations to SAP HANA. However, the tool used was SWPM (sapinst), which has a step that lets me specify the table splitting.

DMO does not have an equivalent step.

May I:

1) run the SWPM export preparation before running DMO;

2) exit SWPM once I have specified the table splitting and SWPM has generated the *.whr files and the whr.txt file;

3) run DMO? I am NOT sure whether DMO will pick up the table-splitting information and split the big tables.

Please help.

Thanks!

former_member188883
Active Contributor
0 Kudos

Hi Christina,

I have no experience with table splitting in DMO. My suggestion would be to raise an OSS message for this.

Alternatively, you can give DMO a try and look out for an advanced migration option for optimization.

Regards,

Deepak Kori

Reagan
Advisor
0 Kudos

Hello Christina

I doubt whether the table splitting files generated by SWPM will be recognized by the DMO option of the SUM tool.

If you are using the DMO option of the SUM tool to migrate and upgrade the SAP system, then I suggest you read this article to understand how DMO works:

http://scn.sap.com/community/it-management/alm/software-logistics/blog/2014/03/10/dmo-technical-proc...

According to the above article, once the shadow repository has been created and migrated to the HANA database (using R3load), the source system is shut down, and the application data is exported from the source and imported into the target system simultaneously using R3load, without creating an export dump.

There is also a file-mode option; for that, you may read this article:

http://scn.sap.com/community/it-management/alm/software-logistics/blog/2014/03/20/dmo-comparing-pipe...

As I haven't used DMO yet, I would ask for clarification on this.

Regards

RB