In many of our integration projects, we encounter a challenge when processing large volumes of data, for example 100,000 records, that must be sent to SAP via a BAPI call for synchronous processing. While CPI waits for the response from SAP, the connection sometimes breaks, resulting in message failures. In other cases, the connection breaks while fetching the 100,000 records from the source system.

Use Case:

I want to share an interesting and challenging integration case that we recently encountered: a complex integration between a planning tool and SAP. The goal was to seamlessly transfer a large amount of data, around 50,000+ records, from the planning tool to SAP through CPI using a BAPI for synchronous processing. However, we faced a significant hurdle during the process.

The Challenge: Connection Breaks and Message Failures

As we started processing this large volume of data, we noticed that responses from SAP took longer than expected. This delay often led to connection breaks and message failures within CPI, a critical issue that had to be addressed to complete the integration successfully.

The Solution: Implementing Pagination

To overcome this challenge, we implemented a pagination approach that let us process the data in smaller, manageable chunks. We extracted the data from the planning tool with a specified page size, such as 100 or 200 records, and derived the pagination from the total number of records. Each chunk was processed individually, and the BAPI responses were collected at the end in a Gather step.
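
To make the chunking concrete, here is a small, standalone Groovy sketch of the arithmetic behind it, using illustrative numbers from the scenario above. This is only a sketch of the idea; in the flow itself the first chunk is handled by the initial process call, so the pagination script later in this post only generates the offsets after the first page.

// Illustrative values only.
def totalRowCount = 50000   // total records reported by the planning tool
def pageSize      = 200     // records fetched and processed per call

// Number of chunks needed to cover all records (ceiling division).
def chunks = (totalRowCount + pageSize - 1).intdiv(pageSize)
assert chunks == 250

// Offsets at which each chunk starts.
def offsets = (0..<chunks).collect { it * pageSize }
assert offsets.take(3) == [0, 200, 400]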

The Process:

Let me walk you through the steps involved in implementing this pagination method:

  1. Getting the JSON Request Payload: We started by retrieving the JSON request payload from the planning system; a minimal sketch of this step follows below.

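As a minimal sketch of this first step, and assuming the planning tool's request JSON carries the total record count and the desired page size (the field names totalRowCount and pageSize below are assumptions), a small Groovy script can extract these values and store them as the exchange properties that the pagination script in the next step relies on:

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.json.JsonSlurper

def Message processData(Message message) {
    // Parse the incoming JSON request payload from the planning tool.
    def request = new JsonSlurper().parseText(message.getBody(java.lang.String) as String)

    // Field names are assumptions; adjust them to the actual request structure.
    // The values are stored as exchange properties for the pagination script.
    message.setProperty("TotalRowCount", request.totalRowCount as Integer)
    message.setProperty("PageSize", request.pageSize as Integer)

    return message
}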

  2. Building the Pagination Approach: Using the total row count and the page size from the request JSON, we wrote a script (shown below) that implements the pagination logic and divides the data into manageable chunks for processing.

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.xml.StreamingMarkupBuilder

def Message processData(Message message) {
    // Read the total row count and page size stored as exchange properties.
    // Property values are often strings, so convert them explicitly.
    def properties = message.getProperties()
    def totalRowCount = properties.get("TotalRowCount") as Integer
    def pageSize = properties.get("PageSize") as Integer

    // Build an XML document with one <row> per additional chunk.
    // The first chunk is handled by the initial process call, so the offsets
    // start at pageSize and increase in steps of pageSize up to totalRowCount.
    def xmlBuilder = new StreamingMarkupBuilder()
    def xmlString = xmlBuilder.bind {
        root {
            for (int i = pageSize; i <= totalRowCount; i += pageSize) {
                row {
                    PageSize(i)
                }
            }
        }
    }

    // Set the pagination XML as the message body and keep a copy in a property
    // so it can be reused later in the flow.
    message.setBody(xmlString.toString())
    message.setProperty("paginationXML", xmlString.toString())

    return message
}
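
To make the script's output concrete: with illustrative values of TotalRowCount = 500 and PageSize = 200, it produces the following pagination XML (formatted with line breaks here for readability; StreamingMarkupBuilder emits it on a single line). The first 200 records are covered by the initial process call, and each row element then drives one further chunk:

<root>
  <row>
    <PageSize>200</PageSize>
  </row>
  <row>
    <PageSize>400</PageSize>
  </row>
</root>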

  3. Forming the Payload: After executing the script, we formed the payload containing the data chunks ready for processing.

  4. Pagination Process Call: Once the first chunk had been processed in the initial process call, we moved to the pagination process call, where a router checked whether the pagination logic needed to be applied.

  5. Splitting Rows as Chunks: We then used a splitter to route each row of the pagination XML as a separate chunk, ensuring that the data was processed in smaller, more manageable portions.

  6. Fetching Data from PlanningTool APIs: To fetch the data from the PlanningTool APIs in chunks, we generated another request body specific to this purpose; a sketch of this step follows after this list.

  7. Calling PlanningTool APIs and Mapping to the SAP BAPI Structure: With the data in hand, we called the PlanningTool APIs, mapped the data to the SAP BAPI structure, and sent it to SAP for processing. The responses for each chunk were collected at the end in a Gather step.
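
For completeness, here is a minimal sketch of the kind of script behind step 6. It assumes the splitter routes each row element of the pagination XML individually, so each split branch carries a single row element with its PageSize offset, and that the planning tool accepts an offset/limit style request; the parameter names offset and limit are assumptions and must be adapted to the real API:

import com.sap.gateway.ip.core.customdev.util.Message
import groovy.json.JsonOutput
import groovy.xml.XmlSlurper

def Message processData(Message message) {
    // Each split branch carries a single <row> element with its PageSize offset.
    def row = new XmlSlurper().parseText(message.getBody(java.lang.String) as String)
    def offset = row.PageSize.text() as Integer
    def pageSize = message.getProperty("PageSize") as Integer

    // Build the next fetch request for the planning tool.
    // "offset" and "limit" are assumed parameter names, not the tool's actual API.
    message.setBody(JsonOutput.toJson([offset: offset, limit: pageSize]))
    return message
}

Each branch then performs the PlanningTool call and the mapping to the BAPI structure, and the Gather step collects the individual BAPI responses into a single message at the end.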

By implementing this method, we successfully addressed the connection breaks and message failures that had occurred while processing large volumes of data in CPI, and ensured a smooth integration even when dealing with large datasets.

I hope this use case and the solution we implemented provide you with valuable insights into overcoming similar challenges. If you have any further questions or need more details, please feel free to ask!
