RFC Call Logging


I have a number of RFCs in the ERP system that are consumed by external systems through a middleware component. All of these RFCs are performing as expected.

The primary call is for a stock check from our web site. This RFC gets called around 4 million times per 24 hour period.

I would like to log every call to these RFCs (start time, result, and end time) to aid with error tracking and load analysis.

I have looked at the various system options, ST03 for example, and although this gives you the number of calls in a day, it does not go down to the level required (each individual call).

I have considered using a DB table, but (with just a two-week retention) this would end up with 168,000,000 entries (4 million calls × 3 states × 14 days).
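For what it's worth, the sizing estimate above works out as follows (just a quick sanity check on the arithmetic):

```python
# Back-of-the-envelope sizing for the proposed log table.
CALLS_PER_DAY = 4_000_000   # stock-check RFC call volume per 24 hours
STATES_PER_CALL = 3         # start, result, end
RETENTION_DAYS = 14         # two-week retention window

rows = CALLS_PER_DAY * STATES_PER_CALL * RETENTION_DAYS
print(f"{rows:,} rows")  # 168,000,000 rows
```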

Can anyone think of a way (or have you already done this) to log every call?




  • Former Member

    Hello James,

I don't have an answer to your question, but thought maybe you could help me out a bit. I have a similar requirement where an RFC might get called a couple of million times in a 24-hour period, on specific days of the year, from an external application through middleware (TIBCO).

Are there any special things you have taken care of to make this happen? Or is this a sustainable solution, so to speak? I am a bit afraid that the system might just crash if there are so many calls happening per day. At the moment the service gets called around 2,000-5,000 times per day.


    Suman Biswas

  • Hi Suman,

    I hope you are well.

    The particular service I am referring to in my question gets called around 4 million times per day (Monday - Friday) in our system with no impact.

It's a similar setup to what you describe: an external consumer coming through a middleware component. The middleware basically acts as a REST-to-RFC translation layer.

Basically, you need to discuss peak-time usage with your Basis team. You need to make sure you have enough connections available for both the RFC and your users/batch/update processes. If our own usage increases, we are considering adding dedicated app servers for the service and using logon groups to ensure only the service accesses those servers.

I hope that helps.



4 Answers

  • Jan 27, 2017 at 10:41 PM

I am implementing a similar system soon, dealing with a third-party system doing the RFC calls, but my volumes are going to be considerably lower. Here are some salient points...

    • All the RFCs are custom.
    • I have a custom class to handle entry into the log table (managing the keys, timestamps, user IDs, RFC source, etc.).
    • Every RFC function calls the class once (or more times, depending on the number of parameters).
    • For tables and structures, I log the structure name in one field and then the entire contents of the structure in a single data field.
    • Some key RFCs also get logging on the way out, with the export parameters.
    • I have a table controlling default logging levels per organizational unit (on a per-plant or per-warehouse basis): Debug, Success, Warning, or only Errors. My default is D for the Dev and QA systems and E for Production. (I log much more than the RFCs, making the D very useful.)
    • There is a Parameter ID (PID) that lets me turn the logging to a different level on a per-user, temporary basis.
    • I have written a custom front end that can parse the structures on a field-by-field basis.
    • The front end takes care of purging the log table the first time it is used every day. This avoids a background job, but with your volumes you may want that job running.

    If you have any specific questions, feel free, I will try my best to give you more information.
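    To make the design above concrete, here is a rough sketch in Python standing in for the ABAP class (all names, levels, and fields here are hypothetical, not the actual implementation): an org-unit default level, a per-user PID-style override, and the structure dumped into a single data field.

```python
import datetime
import json

class RfcCallLogger:
    """Hypothetical sketch of the custom log-table class described above."""

    LEVELS = {"D": 0, "S": 1, "W": 2, "E": 3}  # Debug < Success < Warning < Error

    def __init__(self, org_unit_levels, user_overrides=None):
        self.org_unit_levels = org_unit_levels      # e.g. {"PLANT_01": "E"}
        self.user_overrides = user_overrides or {}  # PID-style per-user override
        self.entries = []                           # stands in for the log table

    def effective_level(self, org_unit, user):
        # The per-user override (the PID) wins over the org-unit default.
        return self.user_overrides.get(user, self.org_unit_levels.get(org_unit, "E"))

    def log(self, level, org_unit, user, rfc_name, structure_name=None, structure=None):
        # Skip entries below the configured threshold for this org unit / user.
        if self.LEVELS[level] < self.LEVELS[self.effective_level(org_unit, user)]:
            return False
        self.entries.append({
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
            "level": level,
            "user": user,
            "rfc": rfc_name,
            # Structure name in one field, full contents flattened into another.
            "structure_name": structure_name,
            "structure_data": json.dumps(structure) if structure else None,
        })
        return True
```

    With production defaulted to E, a developer's PID override to D would capture everything for that one user while the table stays quiet for the 4 million normal calls.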


    • Yes, it is a transparent table. The majority of the time you are writing, not reading, records from this table, so you should not see a performance penalty because of this. Consider your BSEG and MSEG tables, which are two of the backbones of the system. They can easily have tens of millions of records with little performance penalty while writing.

      If you want to consider more aggressive purging, I do have an idea. Since most often you get your reports of an issue much sooner (an assumption here) and you want the 14-day period so you have a few days to research the issue, you could have a second table and an "extractor" program to move selected, filtered data out for manual research over a period of days. This extraction can be by time frame, RFC function module, source IP of the RFC call, an application code that you may have, etc. Stating the obvious, your key design is going to be critical here.
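      As a sketch of that extractor idea (Python standing in for the ABAP program; field names are hypothetical), the filters might look like:

```python
def extract_for_research(log_rows, dest_rows, rfc_name=None, source_ip=None,
                         start=None, end=None):
    """Hypothetical extractor: copy matching rows from the main log table
    into a second 'research' table so the main table can be purged aggressively."""
    for row in log_rows:
        if rfc_name and row["rfc"] != rfc_name:
            continue
        if source_ip and row["source_ip"] != source_ip:
            continue
        if start and row["ts"] < start:
            continue
        if end and row["ts"] > end:
            continue
        dest_rows.append(row)
    return len(dest_rows)
```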

  • Jan 27, 2017 at 03:53 PM

    ST01 can trace RFC calls. Would that help? If there are too many calls, perhaps extract daily into a data warehouse?


  • Feb 13, 2017 at 08:00 PM

    I'm a bit confused by the question. At first it says "All of these RFCs are performing as expected", but then why would any logging be needed if everything is OK? And in the comments the plot thickens with the addition of some "Correlation ID". You might want to share the whole story to get better replies.

    If you truly require such logging, then I'm afraid you have no options other than a transparent table. A file system would open a whole other set of issues, and it's not in any way faster or simpler to use.

    Another question: how would you analyze 168 million records, and do you truly need the logs even for 14 days? Maybe instead you could do like ST01/ST05, which need to be switched on and off, and only create detailed logs for, say, specific peak periods? And dump/delete the data daily if storage is a problem. But again, I'm not really sure what exactly the logs are for and what you're expecting to find.
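    The switch-on/off idea could be as simple as a table of peak windows checked before writing a detailed entry; a minimal sketch (Python pseudocode, everything here is hypothetical):

```python
import datetime

class PeakWindowSwitch:
    """Hypothetical on/off switch: only produce detailed logs inside
    configured peak windows, mirroring the ST01/ST05 switch-on/off idea."""

    def __init__(self, windows):
        # windows: list of (start_time, end_time) pairs in server local time
        self.windows = windows

    def detailed_logging_enabled(self, now=None):
        if now is None:
            now = datetime.datetime.now().time()
        return any(start <= now <= end for start, end in self.windows)
```

    Outside the windows the RFCs would write nothing (or only errors), which cuts the 168 million rows down to whatever the peak periods actually produce.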

    As a side note, high volume interfaces really need to have some serious people involved starting from the design phase to avoid such situations.


    • Thanks for the clarification. Before you go any further, I'd suggest discussing this with a Basis admin/architect or some intelligent person in charge of keeping the SAP lights on. For simple performance analysis, the already mentioned traces should provide enough information. If memory serves, at minimum they'll tell you when a connection happened, from where, and for how long. With the same person you could also discuss the best way of having a detailed log stored.

      I understand what you're saying, and such monitoring can indeed be a catch-22: by running detailed logs 24/7 you'd affect the performance of said services, but if you don't, then you have no idea what happened when there is an issue. It seems that you are on the right path aiming to reduce the logged data. After all, we don't need tracing when everything works fine; it's the exceptions that we're after.

      Good luck!

  • Feb 13, 2017 at 10:16 PM


    Some options

    1. Configure the security audit log via SM19/SM20N for RFC calls to see which users are logging in, etc. You can read up on the security audit log (there are a couple of blogs that show how to break down the log files for analysis).

    2. Look at transaction RSRFCTRC or S_ALR_87101279 for RFC trace logs.

    Have a look at the documentation for RFC Call Logging. It's for an older NetWeaver stack version, but you can search for the latest for your version.

    Although security-centric, this is a good wiki for RFC, including information about transaction STRFCTRACE.

    ST01 would be a bit annoying to have switched on for long durations (someone is more than likely to come along wanting to run ST05, ST01, or STAUTHTRACE and kill your recording to change filters).


