Former Member

Dynamically moving values to internal tables

I have the following problem:

I have defined an internal table (say ITAB_MASTER), and one of its fields holds a table name, i.e. the name of some other internal table (say ITAB2, ITAB3, etc.).

Now I want to move data into the table (ITAB2, ITAB3, etc.) named in the selected row of ITAB_MASTER.

Kindly suggest a way of doing this.


3 Answers

  • Best Answer
    Former Member
    Posted on Feb 21, 2008 at 03:45 AM

    Hi Rishab,

    Check the blog below; it shows a step-by-step procedure for creating dynamic internal tables:

    /people/rich.heilman2/blog/2005/07/27/dynamic-internal-tables-and-structures--abap

    Cheers,

    Hema.


  • Former Member
    Posted on Feb 21, 2008 at 03:54 AM

    Rishab,

    First declare a field symbol for a table:

    FIELD-SYMBOLS <fs> TYPE ANY TABLE.

    ASSIGN itab_temp TO <fs>.

    Note: if ITAB2, ITAB3, ... all have the same structure, there is no problem.

    Otherwise you have to use FIELD-GROUPS.
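
    Because the table name in your case is only known at runtime (it sits in a character field), the name can also be passed to ASSIGN dynamically. A minimal sketch of that variant, assuming the target tables exist as global data objects with one shared line type (the report name, the work area WA, and the field GV_TABNAME are made up for illustration):

    REPORT zdemo_dyn_assign.                      " report name made up for illustration

    DATA: BEGIN OF wa,
            object LIKE e071-object,
          END OF wa,
          itab2 LIKE STANDARD TABLE OF wa,        " possible target tables
          itab3 LIKE STANDARD TABLE OF wa,
          gv_tabname(20) TYPE c.

    FIELD-SYMBOLS <fs_tab> TYPE STANDARD TABLE.   " generic standard table, so APPEND is allowed

    START-OF-SELECTION.
      gv_tabname = 'ITAB3'.                       " table name decided at runtime
      ASSIGN (gv_tabname) TO <fs_tab>.            " dynamic ASSIGN by name
      IF sy-subrc = 0.
        wa-object = 'PROG'.
        APPEND wa TO <fs_tab>.                    " the row lands in ITAB3
      ENDIF.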

    Doc...

    Defining an Extract

    To define an extract, you must first declare the individual records and then define their structure.

    Declaring Extract Records as Field Groups

    An extract dataset consists of a sequence of records. These records may have different structures. All records with the same structure form a record type. You must define each record type of an extract dataset as a field group, using the FIELD-GROUPS statement.

    FIELD-GROUPS <fg>.

    This statement defines a field group <fg>. A field group combines several fields under one name. For clarity, you should declare your field groups at the end of the declaration part of your program.

    A field group does not reserve storage space for the fields, but contains pointers to existing fields. When filling the extract dataset with records, these pointers determine the contents of the stored records.

    You can also define a special field group called HEADER:

    FIELD-GROUPS HEADER.

    This group is automatically placed before any other field groups when you fill the extract. This means that a record of a field group <fg> always contains the fields of the field group HEADER. When sorting the extract dataset, the system uses these fields as the default sort key.

    Defining the Structure of a Field Group

    To define the structure of a record, use the following statement to add the required fields to a field group:

    INSERT <f1> ... <fn> INTO <fg>.

    This statement defines the fields of field group <fg>. Before you can assign fields to a field group, you must define the field group <fg> using the FIELD-GROUPS statement. The fields in the field group must be global data objects in the ABAP program. You cannot assign a local data object defined in a procedure to a field group.

    The INSERT statement, just as the FIELD-GROUPS statement, neither reserves storage space nor transfers values. You use the INSERT statement to create pointers to the fields <fi> in the field group <fg>, thus defining the structures of the extract records.

    When you run the program, you can assign fields to a field group up to the point when you use this field group for the first time to fill an extract record. From this point on, the structure of the record is fixed and may no longer be changed. In short, as long as you have not used a field group yet, you can still extend it dynamically.

    The special field group HEADER is part of every extract record. Consequently, you may not change HEADER once you have filled the first extract record.

    A field may occur in several field groups; however, this means unnecessary data redundancy within the extract dataset. You do not need to define the structure of a field group explicitly with INSERT. If the field group HEADER is defined, an undefined field group consists implicitly of the fields in HEADER, otherwise, it is empty.

    NODES: SPFLI, SFLIGHT.

    FIELD-GROUPS: HEADER, FLIGHT_INFO, FLIGHT_DATE.

    INSERT: SPFLI-CARRID SPFLI-CONNID SFLIGHT-FLDATE
              INTO HEADER,
            SPFLI-CITYFROM SPFLI-CITYTO
              INTO FLIGHT_INFO.

    The program is linked to the logical database F1S. The NODES statement declares the corresponding interface work areas.

    There are three field groups. The INSERT statement assigns fields to two of the field groups.

    Filling an Extract with Data

    Once you have declared the possible record types as field groups and defined their structure, you can fill the extract dataset using the following statements:

    EXTRACT <fg>.

    When the first EXTRACT statement occurs in a program, the system creates the extract dataset and adds the first extract record to it. In each subsequent EXTRACT statement, the new extract record is added to the dataset.

    Each extract record contains exactly those fields that are contained in the field group <fg>, plus the fields of the field group HEADER (if one exists). The fields from HEADER occur as a sort key at the beginning of the record. If you do not explicitly specify a field group <fg>, the EXTRACT statement is a shortened form of the statement EXTRACT HEADER.

    When you extract the data, the record is filled with the current values of the corresponding fields.

    As soon as the system has processed the first EXTRACT statement for a field group <fg>, the structure of the corresponding extract record in the extract dataset is fixed. You can no longer insert new fields into the field groups <fg> and HEADER. If you try to modify one of the field groups afterwards and use it in another EXTRACT statement, a runtime error occurs.

    By processing EXTRACT statements several times using different field groups, you fill the extract dataset with records of different length and structure. Since you can modify field groups dynamically up to their first usage in an EXTRACT statement, extract datasets provide the advantage that you need not determine the structure at the beginning of the program.

    Assume the following program is linked to the logical database F1S.

    REPORT demo_extract_extract.

    NODES: spfli, sflight.

    FIELD-GROUPS: header, flight_info, flight_date.

    INSERT: spfli-carrid spfli-connid sflight-fldate
              INTO header,
            spfli-cityfrom spfli-cityto
              INTO flight_info.

    START-OF-SELECTION.

    GET spfli.
      EXTRACT flight_info.

    GET sflight.
      EXTRACT flight_date.

    There are three field groups. The INSERT statement assigns fields to two of the field groups. During the GET events, the system fills the extract dataset with two different record types. The records of the field group FLIGHT_INFO consist of five fields: SPFLI-CARRID, SPFLI-CONNID, SFLIGHT-FLDATE, SPFLI-CITYFROM, and SPFLI-CITYTO; the first three of these belong to the prefixed field group HEADER. The records of the field group FLIGHT_DATE consist only of the three fields of field group HEADER.

    Reading an Extract

    Like internal tables, you can read the data in an extract dataset using a loop.

    LOOP.
      ...
      [AT FIRST | AT <fgi> [WITH <fgj>] | AT LAST.
      ...
      ENDAT.]
      ...
    ENDLOOP.

    When the LOOP statement occurs, the system stops creating the extract dataset, and starts a loop through the entries in the dataset. One record from the extract dataset is read in each loop pass. The values of the extracted fields are placed in the corresponding output fields within the loop. You can use several loops one after the other, but they cannot be nested. It is also no longer possible to use further EXTRACT statements within or after the loop. In both cases, a runtime error occurs.

    In contrast to internal tables, extract datasets do not require a special work area or field symbol as an interface. Instead, you can process each record of the dataset within the loop using its original field names.

    Loop control

    If you want to execute some statements for certain records of the dataset only, use the control statements AT and ENDAT.

    The system processes the statement blocks between the control statements for the different options of AT as follows:

    AT FIRST

    The system executes the statement block once for the first record of the dataset.

    AT <fgi> [WITH <fgj>]

    The system processes the statement block if the record type of the currently read extract record was defined using the field group <fgi>. With the WITH <fgj> option, the statement block is only processed if the currently read record of field group <fgi> is immediately followed in the extract dataset by a record of field group <fgj>.

    AT LAST

    The system executes the statement block once for the last record of the dataset.

    You can also use the AT and ENDAT statements for control level processing.

    Assume the following program is linked to the logical database F1S.

    REPORT DEMO.

    NODES: SPFLI, SFLIGHT.

    FIELD-GROUPS: HEADER, FLIGHT_INFO, FLIGHT_DATE.

    INSERT: SPFLI-CARRID SPFLI-CONNID SFLIGHT-FLDATE
              INTO HEADER,
            SPFLI-CITYFROM SPFLI-CITYTO
              INTO FLIGHT_INFO.

    START-OF-SELECTION.

    GET SPFLI.
      EXTRACT FLIGHT_INFO.

    GET SFLIGHT.
      EXTRACT FLIGHT_DATE.

    END-OF-SELECTION.

    LOOP.
      AT FIRST.
        WRITE / 'Start of LOOP'.
        ULINE.
      ENDAT.
      AT FLIGHT_INFO WITH FLIGHT_DATE.
        WRITE: / 'Info:',
                 SPFLI-CARRID, SPFLI-CONNID, SFLIGHT-FLDATE,
                 SPFLI-CITYFROM, SPFLI-CITYTO.
      ENDAT.
      AT FLIGHT_DATE.
        WRITE: / 'Date:',
                 SPFLI-CARRID, SPFLI-CONNID, SFLIGHT-FLDATE.
      ENDAT.
      AT LAST.
        ULINE.
        WRITE / 'End of LOOP'.
      ENDAT.
    ENDLOOP.

    The extract dataset is created and filled in the same way as shown in the example for Filling an Extract with Data. The data retrieval ends before the END-OF-SELECTION event, in which the dataset is read once using a loop.

    The control statements AT FIRST and AT LAST instruct the system to write a text line and a horizontal line (ULINE) once at the beginning of the loop and once at the end.

    The control statement AT <fgi> tells the system to output the fields corresponding to each of the two record types. The WITH FLIGHT_DATE option means that the system only displays the records of field group FLIGHT_INFO if at least one record of field group FLIGHT_DATE follows; that is, if the logical database passed at least one date for a flight.

    In the resulting output list, the contents of the field SFLIGHT-FLDATE in the HEADER part of record type FLIGHT_INFO are displayed as pound signs (#). This is because the logical database fills all of the fields at that hierarchy level with the value HEX 00 when it finishes processing that level. This behavior is important for sorting and for control level processing in extract datasets.


  • Former Member
    Posted on Feb 21, 2008 at 03:55 AM

    But I don't want to create another dynamic table.

    I have already defined a table and want to use it.

    This is the structure of the first table:

    DATA: BEGIN OF I_OBJTYPE OCCURS 0,
            POINTER TYPE I,
            OBJTYPE1 LIKE E071-OBJECT,
            OBJTYPE2 LIKE E071-OBJECT,
            OBJTYPE3 LIKE E071-OBJECT,
            TABLENAME(20),
          END OF I_OBJTYPE.

    I want to take the value of the field TABLENAME in the selected entry and append data to the internal table that this value names.

    I have tried to assign a field symbol to the TABLENAME field, but I have not been able to append an entry to the table it names.

    Kindly suggest a way of doing this.
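
    For reference, a minimal sketch of how the TABLENAME field could drive such an append, assuming ITAB2, ITAB3, ... already exist as global internal tables that share one line type (the work area WA_TARGET and its single field are made up for illustration):

    FIELD-SYMBOLS <fs_tab> TYPE STANDARD TABLE.    " generic standard table, so APPEND is allowed

    DATA: BEGIN OF wa_target,                      " placeholder for the shared line type
            object LIKE e071-object,
          END OF wa_target.

    LOOP AT i_objtype.
      ASSIGN (i_objtype-tablename) TO <fs_tab>.    " TABLENAME must hold 'ITAB2', 'ITAB3', ... in uppercase
      CHECK sy-subrc = 0.                          " skip rows whose table name cannot be resolved
      wa_target-object = i_objtype-objtype1.       " fill a row for the named target table
      APPEND wa_target TO <fs_tab>.
    ENDLOOP.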

