
RFC call Logging

macds
Explorer

Hi,

I have a number of RFCs in the ERP system that are consumed by an external system through a middleware component. All of these RFCs are performing as expected.

The primary call is for a stock check from our web site. This RFC gets called around 4 million times per 24-hour period.

I would like to log every call to these RFCs (start time, result, and end time) to aid with error tracking and load analysis.

I have looked at the various system options, ST03 for example, and although this gives you the number of calls in a day, it does not go down to the level required (each individual call).

I have considered using a DB table, but even with just a two-week retention this would end up with 168,000,000 entries (4 million calls × 3 states × 14 days).

Can anyone think of a way (or has anyone already done this) to log every call?

Thanks

James

former_member185489
Participant

Hello James,

I don't have an answer to your question, but thought maybe you could help me out a bit. I have a somewhat similar requirement, where an RFC might get called a couple of million times in a 24-hour period on specific days of the year, from an external application through middleware (TIBCO).

Are there any special things you have taken care of to make this happen? Is this a sustainable solution, so to speak? I am a bit afraid that the system might just crash if there are so many calls happening per day. At the moment the service gets called around 2,000-5,000 times per day.

Regards,

Suman Biswas

macds
Explorer

Hi Suman,

I hope you are well.

The particular service I am referring to in my question gets called around 4 million times per day (Monday to Friday) in our system with no impact.

It's a similar set-up to what you describe: an external consumer coming through a middleware component. The middleware basically acts as a REST-to-RFC translation layer.

Basically, you need to discuss peak-time usage with your Basis team. You need to make sure you have enough connections available for both the RFC calls and your user/batch/update processes. If our own usage increases, we are considering adding dedicated application servers for the service and using logon groups to ensure only the service accesses those servers.

I hope that helps?

James

Accepted Solutions (1)

Colleen
Advisor

Hi

Some options

1. Configure the security audit log (SM19/SM20N) for RFC calls to see which users are logging in, etc. You can read up on the security audit log (a couple of blogs show how to break down the log files for analysis) - https://blogs.sap.com/2014/12/11/analysis-and-recommended-settings-of-the-security-audit-log-sm19-sm...

2. Look at transaction RSRFCTRC or S_ALR_87101279 for RFC trace logs.

Have a look at https://help.sap.com/saphelp_nw70ehp2/helpdata/en/34/9f3b2fda3b184cb2b7179d0fa30eec/content.htm for RFC call logging. It's for an older NetWeaver release, but you can search the latest help.sap.com for your version.

Although security-centric, this is a good wiki for RFC: https://wiki.scn.sap.com/wiki/display/Security/Best+Practice+-+How+to+analyze+and+secure+RFC+connect... including information about transaction STRFCTRACE: https://blogs.sap.com/2010/12/05/how-to-get-rfc-call-traces-to-build-authorizations-for-srfc-for-fre...

ST01 would be a bit annoying to have switched on for long durations (someone is more than likely to come along wanting to run ST05, ST01 or STAUTHTRACE and kill your recording by changing the filters).

Regards

Colleen

Answers (3)

raghug
Active Contributor

I am implementing a similar system soon, dealing with a third-party system making the RFC calls - but my volumes are going to be considerably lower. Here are some salient points...

  • All the RFCs are custom.
  • Have a custom class to handle entry into the log table (managing the keys, timestamps, user IDs, RFC source, etc.) - a sketch follows this list.
  • Every RFC function calls the class once (or more times, depending on the number of parameters).
  • For tables and structures, I log the structure name in one field and then the entire contents of the structure in a single data field.
  • Some key RFCs also get logging on the way out - with the export parameters.
  • I have a table controlling default logging levels for the entire organizational unit (on a per-plant or per-warehouse basis) - Debug, Success, Warning or only Errors. My default is D for the Dev and QA systems and E for Production. (I log much more than the RFCs, making the D level very useful.)
  • There is a Parameter ID (PID) that lets me switch the logging to a different level on a temporary, per-user basis.
  • I have written a custom front end that can parse the structures on a field-by-field basis.
  • The front end takes care of purging the log table the first time it is used each day. This avoids a background job - but with your volumes, you may want that job running.
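
To give you an idea of the shape, here is a heavily simplified sketch of the class. The table ZRFC_LOG, the class and field names, and the PID 'ZLOG' are all invented for illustration, not my real objects:

CLASS zcl_rfc_logger DEFINITION.
  PUBLIC SECTION.
    CONSTANTS: c_debug TYPE char1 VALUE 'D',
               c_error TYPE char1 VALUE 'E'.
    CLASS-METHODS log
      IMPORTING iv_level  TYPE char1
                iv_rfc    TYPE funcname
                iv_struc  TYPE tabname     OPTIONAL
                iv_data   TYPE string      OPTIONAL
                iv_corrid TYPE sysuuid_c32 OPTIONAL.
ENDCLASS.

CLASS zcl_rfc_logger IMPLEMENTATION.
  METHOD log.
    DATA: ls_log   TYPE zrfc_log,      " assumed custom transparent table
          lv_level TYPE char1.

    " Temporary per-user override via parameter ID; otherwise fall back
    " to the default level (hard-coded here; mine comes from a table)
    GET PARAMETER ID 'ZLOG' FIELD lv_level.
    IF lv_level IS INITIAL.
      lv_level = c_error.              " production default: errors only
    ENDIF.

    " At level E only errors are written; at level D everything is
    IF lv_level = c_error AND iv_level <> c_error.
      RETURN.
    ENDIF.

    " Key fields (e.g. a GUID from CL_SYSTEM_UUID) omitted for brevity
    ls_log-rfc_name   = iv_rfc.
    ls_log-uname      = sy-uname.
    ls_log-level      = iv_level.
    ls_log-corrid     = iv_corrid.
    ls_log-struc_name = iv_struc.      " structure name in one field,
    ls_log-content    = iv_data.       " its entire contents in another
    GET TIME STAMP FIELD ls_log-log_ts.  " assumes a TIMESTAMPL field
    INSERT zrfc_log FROM ls_log.       " write-only access path
  ENDMETHOD.
ENDCLASS.

Each RFC then calls zcl_rfc_logger=>log( ) once on entry, and the key ones again on the way out with the export parameters. If your release has it, /UI2/CL_JSON=>SERIALIZE is one option for flattening a structure into the single content field.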

If you have any specific questions, feel free, I will try my best to give you more information.

macds
Explorer

Hi Raghu,

I assume you are using a transparent table for this?

The detail you have provided is similar to my original thinking. Again, my concern is the number of updates I will be doing to this table and the impact on service performance.

raghug
Active Contributor

Yes, it is a transparent table. The majority of the time you are writing, not reading, records from this table, so you should not see a performance penalty because of this. Consider your BSEG and MSEG tables, which are two of the backbones of the system: they can easily have tens of millions of records with little performance penalty while writing.

If you want to consider more aggressive purging, I do have an idea. Since you most often get reports of an issue much sooner (an assumption on my part), and you want the 14-day period so you have a few days to research an issue, you could have a second table and an "extractor" program that moves selected, filtered data off for manual research over a period of days. The extraction can be by time frame, RFC function module, source IP of the RFC call, an application code you may have, etc. Stating the obvious: your key design is going to be critical here.
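
A rough sketch of that extract-then-purge job, reusing the invented ZRFC_LOG table from my earlier sketch plus an assumed identically structured side table ZRFC_LOG_KEEP:

REPORT zrfc_log_purge.

PARAMETERS: p_func TYPE funcname,       " filter: one RFC function module
            p_days TYPE i DEFAULT 2.    " retention window in days

DATA lv_now TYPE timestampl.
GET TIME STAMP FIELD lv_now.
DATA(lv_cutoff) = cl_abap_tstmp=>subtractsecs( tstmp = lv_now
                                               secs  = p_days * 86400 ).

" First move the records you still want to research to the side table
SELECT * FROM zrfc_log
  WHERE rfc_name = @p_func
  INTO TABLE @DATA(lt_keep).
INSERT zrfc_log_keep FROM TABLE lt_keep ACCEPTING DUPLICATE KEYS.

" Then purge everything older than the retention window
DELETE FROM zrfc_log WHERE log_ts < @lv_cutoff.
COMMIT WORK.

The same WHERE clause could filter by time frame, source IP or application code instead - whatever your key design supports.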

Jelena
Active Contributor

I'm a bit confused by the question. At first it says "All of these RFCs are performing as expected" - but then why would any logging be needed if everything is OK? And in the comments the plot thickens with the addition of some "Correlation ID". You might want to share the whole story to get better replies.

If you truly require such logging then I'm afraid that, other than a transparent table, you have no options. A file system would open a whole other set of issues, and it's not in any way faster or simpler to use.

Another question: how would you analyze 168 million records, and do you truly need the logs even for 14 days? Maybe instead you could do as with ST01/ST05, which need to be switched on and off, and only create detailed logs for, say, specific peak periods? And dump/delete the data daily if storage is a problem. But again, I'm not really sure what exactly the logs are for and what you're expecting to find.

As a side note, high-volume interfaces really need to have some serious people involved, starting from the design phase, to avoid such situations.

raghug
Active Contributor

In my case, we have two third parties involved on opposing sides of the interface, so it was a pre-emptive method of getting out of the good old finger-pointing-go-nowhere standoff. It also helps with debugging in Dev. In production I have it turned down to log only error messages. I am curious about the OP's situation too.

macds
Explorer

Hi Jelena,

Thanks for your reply.

Currently I have no real way of measuring peak times of the services (by service), average runtime, which calls exit due to errors (invalid article, etc.) and which complete successfully. This kind of insight is valuable.

The 14 days was aspirational; after some consideration, I think I only need to keep logs for a couple of days.

An earlier comment from Raghu has moved me towards more of a logging framework, with the ability to switch between no logging, errors, runtimes, debug, etc. In most cases I would only be looking for errors and runtimes, thus reducing the amount of data.

The Correlation ID is for tracking service flow through each layer of a service call. For instance, our website might ask for a stock position. It would call its own application service, which calls through an ESB and on to ERP. Each stock check request would have a unique Correlation ID, and each layer would log using this ID. You can then analyse the journey through each layer to get the complete picture. This can aid performance checking (where is the bottleneck, etc.) and remove the finger pointing (see Raghu's comment below).
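
On the SAP side that just means carrying the ID in each RFC interface. A minimal sketch - the function name and parameter are invented, and the logger is a class along the lines Raghu describes:

FUNCTION z_rfc_stock_check.
*"----------------------------------------------------------------------
*"*"Local Interface:
*"  IMPORTING
*"     VALUE(IV_MATNR)          TYPE  MATNR
*"     VALUE(IV_CORRELATION_ID) TYPE  SYSUUID_C32
*"  EXPORTING
*"     VALUE(EV_STOCK)          TYPE  LABST
*"----------------------------------------------------------------------

  " The consumer generates the Correlation ID once; the web layer, the
  " ESB and this RFC each write it to their own log, so one request can
  " be followed end to end across every layer.
  zcl_rfc_logger=>log( iv_level  = 'D'
                       iv_rfc    = 'Z_RFC_STOCK_CHECK'
                       iv_corrid = iv_correlation_id ).

  " ... actual stock check logic ...

ENDFUNCTION.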

As said, the current RFCs are working OK, but we do get questions we can't answer (e.g. "Did you see a performance issue with service X at 15:00 yesterday?"). With the logs we can check the average runtime, etc. We are also building more services every day, and I think it's the right thing to have a proper mechanism for logging and analysis. Most of the SAP standard offerings are only turned on after the horse has bolted: start tracing now even though the issue was an hour ago.

I hope that all makes sense?

Jelena
Active Contributor

Thanks for the clarification. Before you go any further, I'd suggest discussing this with a Basis admin/architect or some intelligent person in charge of keeping the SAP lights on. As far as simple performance analysis goes, the already mentioned traces should provide enough information; if memory serves, at minimum they'll tell you when a connection happened, from where, and for how long. With the same person you could also discuss the best way of having a detailed log stored.

I understand what you're saying, and such monitoring can indeed be a catch-22: by running detailed logs 24/7 you'd affect the performance of said services, but if you don't, you have no idea what happened when there is an issue. It seems that you are on the right path in aiming to reduce the logged data. After all, we don't need tracing when everything works fine; it's the exceptions that we're after.

Good luck!

matt
Active Contributor

ST01 can trace RFC calls. Would that help? If there are too many calls - perhaps extract daily into a data warehouse?

Sandra_Rossi
Active Contributor

Even ST05 (I'm not sure whether the ST01 and ST05 RFC traces look the same; I remember that for one trace ST05 was sexier than ST01).

macds
Explorer

ST01 and ST05 traces are both on-demand tools (you turn them on and off as you require). Therefore, they are not really designed to run constantly, logging millions of requests per day.

I don't think I can use any kind of standard "out of the box" trace due to one specific requirement: the Correlation ID. We plan to have this ID passed from the consumer through each layer used (consumer layer, ESB, SAP, etc.). Each layer will log this unique ID. We will then easily be able to check the full end-to-end process for any request.

Therefore, I think I need some kind of bespoke logging, whether to a DB table, the file system or somewhere else.

It keeps coming back to the 4 million requests per day issue.

kiran_k8
Active Contributor

James,

Did you explore whether ST12 has got anything for your requirement?

K.Kiran.