

Hello SAP Gurus,

I'm new to SAP testing. I would like to know the steps involved in User Acceptance Testing (UAT).

I've heard that SAP testing is done using eCATT. Kindly let me know the details of eCATT. Also, is SAP testing done by the SAP functional consultants or by the technical consultants (ABAPers)?

Looking for your early feedback.

With regards,




2 Answers

  • Best Answer
    Former Member
    Posted on Aug 11, 2008 at 05:16 AM


UAT is carried out by having the user himself test the development objects in the Quality server and then obtaining his approval (e.g. through an official mail). This approval mail is attached to your UAT document, and the UAT document is parked in your TP tracker so that the objects tested and approved by the user can be moved from the Development server to the Production server.

Regarding eCATT, please find the details below:

    Creating Test Cases with CATT and Extended CATT


    You can create test cases with the Computer Aided Test Tool (CATT) and the Extended Computer-Aided Test Tool (Extended CATT), in the Test Workbench. You can use various test case types, depending on the purpose of the test.

    You define test cases with CATT and Extended CATT. A distinction is made between test cases for acceptance tests and for functional tests.

Acceptance test cases are always performed manually by testers. They are subjective tests of the usability of business processes, so they take the form of test descriptions that the tester must work through.

    Functional test cases ensure that transaction functions contain no errors. They can run automatically without user dialog and need not be performed by testers. Test cases are usually created by recording the transactions to be tested with CATT or Extended CATT.


You can process all test cases that you create with CATT and Extended CATT as follows:

· Manage them in test catalogs

· Integrate them in test plans

· Put them in test packages

· Check their status in the status analysis

_Setting Up eCATT Test Configurations for Use in ST30 (Transaction Code)_


    eCATT test configurations are used as a basis for running performance tests using the Global Performance Analysis (ST30).


    Make sure that the RFC user has the authorizations for transactions ST05 and STAD (profile S_TOOLS_EX for statistical records and profile S_ADMI_FCD to use the SQL Trace) in the systems called from the central test system.


1. Stop the Easy Access menu in the session in which you create the test configuration: enter “/n” once or several times in the relevant session. The Easy Access menu has been stopped successfully when the screen with the Start SAP Easy Access button is displayed.

    2. Record the test scripts to be used within the test configuration with GUI scripting.

In your local GUI (for example, SAP GUI for Windows), activate Enable Scripting but deactivate both Notify when… indicators in the Options menu (Alt+F12 → Options → Scripting tab).

    3. Maintain the test configurations in the central test system.

Make sure you define the RFC destinations on which eCATT test configurations are supposed to run without load balancing (transaction SM59), because the performance figures created in a system that runs on multiple servers are server-specific.

    4. Create an adequate quantity of data to serve as a basis for testing in each system. Create the data using client copy, extra eCATT scripts and/or reports to reset the data to a default.

5. To help differentiate between relevant and irrelevant measurement data in the statistical data list, use data-correcting reports (prefix “Z_*”) within the eCATT test scripts.

    6. Execute the eCATT test configuration in transaction SECATT to check whether it runs correctly.


    You have set up an eCATT test configuration that can be used as a basis for performance testing in transaction ST30.

    *Running eCATT Test Configurations From Within ST30*

    Setting the Run Parameters:

    1. In transaction ST30, choose the eCATT Test tab.

    2. Enter a log ID in the Log ID field.

    The purpose of a log ID is to group related or similar tests under a common node, so that later on the tests can be found and displayed together in a list. You must specify an existing log ID or create a new one and save it (this requires you to specify a transport order because the log ID entries can be transported to another system). To create a new log ID, choose Edit Log IDs.

    3. Enter a description of the data in the Performance Test field to be able to identify the performance figures in transaction ST33 later.

    4. Specify the eCATT test configuration in the Test Configuration field.

You can also specify an evaluation schema for the performance figures. An evaluation schema defines which of the data records in the global performance analysis statistics are used for the evaluation.

    5. For reliable measurement results, proceed as follows:

    a. Enter the number of eCATT preprocesses in the No. of eCATT Preprocs field. Specify at least 5 (recommended) to fill the system buffers.

    eCATT preprocesses precede the main runs from which the performance figures are retrieved. They set the resources used for the run (program buffer, table buffer, and so on) to a well-defined status, before the performance of the subsequent eCATT runs is measured.

    b. Set the No. of eCATT Main Runs to at least 10 (recommended).

    These are the runs whose performance behavior is to be measured. They are executed after the eCATT preprocesses.

If you activate With Run for SQL Trace, one additional run of the test configuration is executed to create an SQL trace. This prevents the SQL trace from influencing the measurements of the main runs.
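The preprocess/main-run split described above is the standard warm-up-then-measure benchmarking pattern. As a rough, SAP-independent sketch (plain Python with a stand-in workload, not a real eCATT run), it looks like this:

```python
import time

def measure(workload, n_warmup=5, n_main=10):
    """Run `workload` n_warmup times to fill buffers (timings discarded),
    then n_main times with timing, and return the main-run timings."""
    for _ in range(n_warmup):        # corresponds to eCATT preprocesses
        workload()
    timings = []
    for _ in range(n_main):          # corresponds to eCATT main runs
        start = time.perf_counter()
        workload()
        timings.append(time.perf_counter() - start)
    return timings

# Stand-in workload; a real test would drive the transaction under test.
def workload():
    sum(i * i for i in range(10_000))

timings = measure(workload)
print(len(timings))
```

The point of the warm-up runs is the same as in ST30: without them, the first measurements would include one-off costs (buffer and cache fills) that distort the averages.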

    Programming Guidelines (Optional):

If you activate Checklist, the program generates a list of performance-checklist results for use in an external spreadsheet application, such as Microsoft Excel. The checklist can only be created if SQL Trace Analysis (and therefore also With Run for SQL Trace) is also set, because some checklist results are based on the analysis of the SQL trace.
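The checklist is essentially tabular data meant for a spreadsheet. As a hedged illustration (the check names and two-column layout here are hypothetical, not the actual ST30 export format), writing such results to CSV so Excel can open them might look like:

```python
import csv
import io

# Hypothetical checklist rows; the real ST30 columns and values differ.
results = [
    ("Indexes",          "OK"),
    ("WHERE statements", "Check"),
    ("Buffers",          "OK"),
]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["Check", "Result"])   # header row for the spreadsheet
writer.writerows(results)
print(buf.getvalue().strip())
```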

    SQL Trace Analysis:

    To be able to perform this analysis, an SQL trace must have been created.

If you activate this indicator, the Code Inspector checks the coding during the performance test. The purpose of this function is to identify inefficient database accesses that may arise from dynamic SQL statements in the programs that ran before. The checklist results for "Indexes", "WHERE statements" and "Buffers" are derived from this automatic SQL trace analysis (executed by the Code Inspector).

You can only activate the SQL trace analysis if you also choose With Run for SQL Trace, since otherwise there is no SQL trace available to determine the programs involved.

    Static Program Analysis:

    This indicator determines whether the performance checks of the Code Inspector are used with the programs involved during the performance test (eCATT). The purpose of this function is to identify inefficient database accesses in the static coding.

    You can only activate the static program analysis if you also choose With Run for SQL Trace, since otherwise there is no SQL trace available to determine the programs involved.

    Distributed Statistics Data (DSRs) (Optional)

    In the Central System Destination field you can specify the destination of the central monitoring system in the system landscape.

    In the central monitoring system, for example, non-ABAP system components (such as J2EE components) are registered. The statistical data for these system components can only be collected if this destination (usually the central system that is used for monitoring the system landscape) is used. If you do not make a specification, ST30 considers the current system as the central monitoring system.

    The destination entered here is only taken into account if With Distributed Statistical Data was also set. This indicator determines whether additional distributed statistics records (DSRs) are to be collected from the system components (such as J2EE components) known to the central monitoring system (accessible via the specified destination). In this case, all components that are registered in the central monitoring system are checked for statistics data that may have accumulated.

    ● With/Without Transactional Context

    This indicator determines whether the system searches for distributed statistics data (DSRs) without the transaction context of the business process executed.

    ● Additional Statistics Data

    You define here whether, in addition to the DSRs automatically collected by the specified central monitoring system, additional statistics data is to be collected from destinations entered manually on the Manual Test 1 or Manual Test 2 tabs before the test run. For more details, see the F1 Help.

    Data Comparison

    In this group box you can perform comparisons between the performance figures of two different tests. Enter the names of the performance tests to be compared and specify the values as required. For a description of each field, see the F1 Help.
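Conceptually, such a comparison boils down to computing deltas between the figures of two runs. A minimal sketch (the metric names and values below are hypothetical, not real ST30 fields):

```python
def compare(run_a, run_b):
    """Return the percentage change from run_a to run_b, per metric."""
    return {
        metric: round(100.0 * (run_b[metric] - run_a[metric]) / run_a[metric], 1)
        for metric in run_a
        if metric in run_b and run_a[metric] != 0
    }

# Hypothetical performance figures for two test runs (not real ST30 output).
before = {"response_time_ms": 240.0, "db_time_ms": 90.0, "db_selects": 120}
after  = {"response_time_ms": 180.0, "db_time_ms": 60.0, "db_selects": 120}

print(compare(before, after))
```

A negative percentage indicates an improvement between the two runs, which is the kind of before/after judgment the comparison in ST30 is meant to support.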

    Test Control:

    Choose eCATT Test Only to start the measurement runs. The system executes the test runs in online mode.


    After the runs have completed, check for possible error messages on the Automatic Test: Logb tab.

    If the log identifies error messages that occurred during the run of a test configuration itself:

    1. Start transaction SECATT.

2. Choose Goto → Logs.

    3. Specify the procedure number of the eCATT log in the Current Procedure No. field. You can obtain the procedure number on the Autom. Test: Logb tab in ST30.

    4. Press Enter to call the log.


    Hope this helps.



    Edited by: Tejas Pujara on Aug 11, 2008 7:21 AM


  • Posted on Aug 11, 2008 at 05:38 AM


Please refer to this link.




