10-29-2019 8:25 PM - edited 02-04-2024 6:07 AM
I'm beginning an investigation of the SAP Cash Application Machine Learning capability. I have read through the implementation guidance and feel comfortable with how to enable the connections.
I am interested in recommendations on testing the SAP Cash Application function within one of our test landscapes instead of a productive landscape. My thought is to use a recent copy of the productive system, use this copy for the 'training', then use a recent unprocessed bank statement (or copy and upload the next bank statement) for testing.
Any thoughts or other guidance?
There are two ways to test Cash Application:
1. Benchmark PoC - train the ML model and analyze the Accuracy and Proposal Ratio to identify which company code (in which country) should be started first.
2. Inference PoC - validate the results proposed by Cash Application.
The Inference PoC normally comes after the Benchmark PoC.
Both can connect SAP CF with an S/4 system copied from the production system.
1. The Benchmark PoC has three requirements:
- bank statement (BS) and invoice history data with more than 10,000 cleared items per customer per company code;
- up to 100,000 incoming payment records per year;
- use transaction FEBA/FEBAN (FEB_BSPROC) for post-processing after the BS import.
2. Before you run the Inference PoC, you need to keep your Electronic Bank Statement (EBS) files for the testing period (for example, from Jan 01 to Jan 07, 2019). The EBS is then sent together with the open invoices (SLA) to Cash Application, which proposes clearing candidates.
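As a rough illustration of the volume guidance above, the sketch below scans exported cleared-item records and flags company-code/customer pairs that fall short of the 10,000-item guideline. The record layout (`company_code`, `customer` keys) and the export itself are assumptions for illustration, not an SAP file format or API:

```python
from collections import Counter

# Thresholds taken from the guidance in this thread
MIN_CLEARED_ITEMS = 10_000       # more than 10,000 cleared items recommended
MAX_PAYMENTS_PER_YEAR = 100_000  # up to 100,000 incoming payments per year

def cleared_counts(records):
    """Count cleared items per (company_code, customer) pair.

    `records` is assumed to be an iterable of dicts, e.g. rows read from a
    CSV export of cleared bank statement items (hypothetical layout).
    """
    counts = Counter()
    for row in records:
        counts[(row["company_code"], row["customer"])] += 1
    return counts

def below_guideline(counts):
    """Return the pairs that do not yet meet the >10,000 cleared-items guidance."""
    return {pair: n for pair, n in counts.items() if n <= MIN_CLEARED_ITEMS}

# Tiny illustrative sample (invented data)
sample = [
    {"company_code": "1000", "customer": "C001"},
    {"company_code": "1000", "customer": "C001"},
    {"company_code": "2000", "customer": "C002"},
]
counts = cleared_counts(sample)
print(below_guideline(counts))  # both pairs are far below 10,000 here
```

A check like this on the copied production system helps pick which company code to benchmark first, before any connection to SAP CF is set up.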
Kind regards,
Dear Yasuko,
We are considering using Cash Application in our production system (SAP S/4HANA Cloud ES).
Are these requirements also needed to use Cash Application in a production system?
- BS and invoice history data with more than 10,000 cleared items per customer per company code;
- up to 100,000 incoming payment records per year.
Regards, Mino Kato
In example 01 in the document below, the PoC was run with only 1,028 cleared records.
https://www.sap.com/japan/documents/2018/12/da07ce55-317d-0010-87a3-c30de2ffd8ff.html
Regards,
Mino Kato
Hello Mino Kato,
The numbers mentioned are best-practice numbers; anything less would also work. This is machine learning, and the more data you have, the better the model will be trained, hence the recommendation.
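The "more data, better model" point can be seen in a toy experiment (plain Python, no SAP code): estimating a quantity from samples, the estimate tightens as the sample count grows, which is the same reason a model trained on more cleared items generalizes better. The distribution and numbers are invented for illustration:

```python
import random

rng = random.Random(0)   # fixed seed so the sketch is repeatable
true_value = 5.0         # the "pattern" we want the model to learn
samples = [rng.gauss(true_value, 2.0) for _ in range(10_000)]

# With more data, the estimate drifts closer to the true value.
for n in (10, 100, 1_000, 10_000):
    estimate = sum(samples[:n]) / n
    print(f"n={n:>6}  estimate={estimate:.3f}  error={abs(estimate - true_value):.3f}")
```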
Regards, Ananth
Hi Ananth
Thank you!
Mino Kato
Hi Mino,
In my experience, 1,028 records is too small a dataset for a benchmark.
The benchmark runs with two blocks of data: model training and validation.
If the ratio of training data to validation data is 9:1, around 900 records are used for model creation and the remaining roughly 100 are used for model validation.
If there is not enough training data, the characteristics that influence clearing are not captured effectively, and the model is built from very generic attributes. Even if the data augmentation feature is implemented, the result might not be accurate, I think. On this point, I would ask for a comment from a data scientist on Ananth's team.
If you have more than 10,000 cleared bank statement items, the benchmark result will be more accurate.
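The 9:1 split described above can be sketched as follows; the shuffling, the exact ratio, and the helper name are assumptions about a generic benchmark setup, not the internal SAP implementation:

```python
import random

def split_records(records, train_ratio=0.9, seed=42):
    """Shuffle cleared items and split them into training and validation sets."""
    shuffled = list(records)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# With the 1,028 cleared records from the linked example:
cleared_items = list(range(1028))
train, validation = split_records(cleared_items)
print(len(train), len(validation))  # 925 training, 103 validation
```

With only about 100 validation records, a single mismatched payment moves the measured accuracy by roughly a full percentage point, which is part of why such a small benchmark is unreliable.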
Kind regards,
Yasuko