Former Member

Parallel Processing

Hi All - We have a program with a SELECT statement that joins 3 tables and fetches a large amount of data. This program runs quite frequently and takes about a couple of minutes each time. I just wanted to know if we can use parallel processing for this.

1. Is it possible to use the parallel processing technique for a Select statement?

2. Will fetching the data using the PACKAGE SIZE option improve performance?

3. Can parallel processing be used for programs run in the foreground?




5 Answers

  • Sep 11, 2008 at 12:31 AM


    What do you mean by parallel processing? It is obvious that your bottleneck is the database. I don't see any way to parallelize this process. You can try to improve your query's performance by optimizing it (maybe create a new index, and so on), or you can try to tune your query at the DB level using DB hints.

    So my quick answers:

    1) I do not think so.

    2) I do not know what you mean.

    3) You can create sub-tasks, run them in the background, and wait for the results from all tasks. There is an example for this scenario in the ABAP documentation; I do not have access to a system right now.
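
    For the index/hint idea mentioned above, a database hint can be attached to an Open SQL statement like this. This is only a sketch: the table and field names are stand-ins for your own, the Oracle index name "VBAP~Z01" is hypothetical, and the hint text itself is database-specific.

    ```abap
    " Force a specific index on Oracle; check the hint syntax for your database.
    SELECT vbeln posnr matnr
      FROM vbap
      INTO TABLE lt_vbap
      WHERE matnr = lv_matnr
      %_HINTS ORACLE 'INDEX("VBAP" "VBAP~Z01")'.
    ```

    Use hints only as a last resort; a suitable index usually helps more, because the optimizer can then pick it on its own.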



  • Former Member
    Sep 11, 2008 at 02:49 AM

    Hi Sam,

    Answer to question 1:

    As long as you can select your data in many sub-packages (selecting a smaller amount of data each time) and reassemble the partial results into the final result set you need, you can use the parallel processing technique to improve the performance of the program.
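
    A sketch of this idea, assuming a hypothetical key table lt_pernr that is split into packages of 1,000 before each package is selected separately (each package could then also be handed to its own parallel task):

    ```abap
    " Select in sub-packages and reassemble the partial results.
    DATA: lt_pernr_pkg   LIKE lt_pernr,      "one package of keys
          lt_result_part LIKE lt_result.     "partial result per package

    WHILE lt_pernr IS NOT INITIAL.
      CLEAR: lt_pernr_pkg, lt_result_part.
      APPEND LINES OF lt_pernr FROM 1 TO 1000 TO lt_pernr_pkg.
      DELETE lt_pernr FROM 1 TO 1000.

      SELECT pernr begda endda
        FROM pa0001
        INTO TABLE lt_result_part
        FOR ALL ENTRIES IN lt_pernr_pkg
        WHERE pernr = lt_pernr_pkg-pernr.

      APPEND LINES OF lt_result_part TO lt_result.
    ENDWHILE.
    ```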

    Answer to question 2:

    The performance is normally measured by the amount of data retrieved relative to the time needed to get it. You can improve performance by following these 5 principles:

    2a. Keep the result set small

    2b. Minimize the amount of data transferred

    2c. Minimize the number of data transfers

    2d. Minimize the search overhead

    2e. Reduce the database load

    It is good to use SE30 first to evaluate the cause of the problem, whether it lies with your server or your program. If it is caused by the server, then you need Basis and your management to enhance the infrastructure 😊

    Answer to question 3:

    Yes, you can run parallel processing in the foreground. You can use the IDoc parallel-processing program RBDAPP01 as a reference to create a similar program.

    In summary, I would recommend that you first confirm the performance bottleneck using SE30. If the problem is in the code, then focus on the SELECT and LOOP statements. You can check whether your SELECT statement is effective using ST05. If you have nested loops, make sure you follow the parallel cursor approach as described in the SE30 tips and tricks.
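
    The parallel cursor approach mentioned above can be sketched like this, assuming two hypothetical header/item tables lt_vbak and lt_vbap, both sorted by vbeln:

    ```abap
    " Parallel cursor: avoid rescanning the inner table in a nested loop.
    SORT: lt_vbak BY vbeln,
          lt_vbap BY vbeln.

    LOOP AT lt_vbak INTO ls_vbak.
      " Find the first matching item via binary search ...
      READ TABLE lt_vbap TRANSPORTING NO FIELDS
           WITH KEY vbeln = ls_vbak-vbeln BINARY SEARCH.
      CHECK sy-subrc = 0.
      lv_index = sy-tabix.
      " ... then read the items sequentially until the key changes.
      LOOP AT lt_vbap INTO ls_vbap FROM lv_index.
        IF ls_vbap-vbeln <> ls_vbak-vbeln.
          EXIT.
        ENDIF.
        " process the matching header/item pair here
      ENDLOOP.
    ENDLOOP.
    ```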

    Hope it helps.


    • Hi Sam,

      Try the following code, replacing the variables (and the placeholder function module name) with your own:-

      DATA: lv_tasknumber        TYPE i VALUE 0,   "current task number
            lv_taskname          TYPE char8,       "task name for STARTING NEW TASK
            lv_total_no_of_tasks TYPE i VALUE 0.   "number of tasks still running

      TYPES: BEGIN OF ty_per_content,
               pernr          TYPE pernr_d,
               internal_table TYPE TABLE OF workarea WITH DEFAULT KEY, "use your row type
               tasknumber     TYPE i,
             END OF ty_per_content.
      DATA: lt_per_content TYPE TABLE OF ty_per_content,
            ls_per_content TYPE ty_per_content.

      " Fill the internal table lt_per_content with the pernr list, then:
      LOOP AT lt_per_content INTO ls_per_content.
        " Associate a unique task number with each pernr
        ADD 1 TO lv_tasknumber.
        ls_per_content-tasknumber = lv_tasknumber.
        MODIFY lt_per_content FROM ls_per_content.
        ADD 1 TO lv_total_no_of_tasks.             "count the started tasks

        " Call the function module; STARTING NEW TASK starts a new thread
        lv_taskname = lv_tasknumber.
        CALL FUNCTION 'Z_GET_PERNR_DATA'           "placeholder: your RFC-enabled FM
          STARTING NEW TASK lv_taskname
          PERFORMING task_finished ON END OF TASK
          EXPORTING
            im_pernr = ls_per_content-pernr.
      ENDLOOP.

      " Wait until all tasks have reported back
      WAIT UNTIL lv_total_no_of_tasks EQ 0.

      FORM task_finished USING current_task TYPE clike.
        DATA lv_task TYPE i.
        lv_task = current_task.
        SUBTRACT 1 FROM lv_total_no_of_tasks.
        LOOP AT lt_per_content INTO ls_per_content WHERE tasknumber = lv_task.
          " Collect the result of the finished task
          RECEIVE RESULTS FROM FUNCTION 'Z_GET_PERNR_DATA'
            IMPORTING
              ex_table = ls_per_content-internal_table.
          MODIFY lt_per_content FROM ls_per_content.
        ENDLOOP.
      ENDFORM.

      I hope this piece of code is of some help to you.



      PS: try substituting internal_table with the table that you need 😊

  • Former Member
    Sep 11, 2008 at 05:22 AM


    You can use a remote-enabled function module for this purpose.

    If you call the remote function module asynchronously, its processing is executed in another work process, so parallel processing is achieved.
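
    A minimal sketch of such an asynchronous RFC call. The function module name 'Z_FETCH_DATA' and its parameters are placeholders, not a real FM; DESTINATION IN GROUP DEFAULT uses the default RFC server group.

    ```abap
    " The call returns immediately; the work runs in another work process
    " and the callback form collects the result.
    CALL FUNCTION 'Z_FETCH_DATA'            "placeholder: an RFC-enabled FM
      STARTING NEW TASK 'TASK1'
      DESTINATION IN GROUP DEFAULT
      PERFORMING receive_result ON END OF TASK
      EXPORTING
        im_key = lv_key.

    WAIT UNTIL gv_done = abap_true.

    FORM receive_result USING p_task TYPE clike.
      RECEIVE RESULTS FROM FUNCTION 'Z_FETCH_DATA'
        IMPORTING
          ex_data = gt_data.
      gv_done = abap_true.
    ENDFORM.
    ```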


  • Sep 11, 2008 at 05:26 AM

    Parallel sessions can be opened with CALL FUNCTION ' ' STARTING NEW TASK.

    Just check the F1 help...

    But I don't think this will meet your requirement...


  • Former Member
    Sep 11, 2008 at 10:04 AM

    As long as you do no updates, parallel processing is always possible. You say you use a join statement and that it takes a lot of time because of the amount of data. The join works with temporary tables, so the join itself does not add extra time for database access, but ABAP has to work through the temporary result set, and that can take a lot of time. In such a case the join is sometimes not the solution: try normal nested SELECT statements instead. If that takes longer than the join, then the join is the better way and you have to accept the waiting time.
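
    One common way to split a join into separate selects is FOR ALL ENTRIES, a sketch using VBAK/VBAP as stand-ins for your own tables and selection criteria:

    ```abap
    " Read the header table first ...
    SELECT vbeln erdat
      FROM vbak
      INTO TABLE lt_vbak
      WHERE erdat IN s_erdat.

    " ... then the items for exactly those headers.
    IF lt_vbak IS NOT INITIAL.
      SELECT vbeln posnr matnr
        FROM vbap
        INTO TABLE lt_vbap
        FOR ALL ENTRIES IN lt_vbak
        WHERE vbeln = lt_vbak-vbeln.
    ENDIF.
    ```

    Note the IS NOT INITIAL check: with an empty driver table, FOR ALL ENTRIES would select every row.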

    Best Regards,



    • Former Member

      Thank you. Can you just give me an example of how to achieve this parallel processing for the select statement?