Technology Blogs by SAP
Robotic process automation (RPA) is the backbone of business process automation. SAP Intelligent Robotic Process Automation provides the framework and tools to automate your processes end to end. To learn more about SAP Intelligent RPA, visit the official "Intelligent RPA" page.

In this blog post, I am going to touch upon the following:

  • What is SAP Intelligent RPA Surface Automation

  • Some of the Existing Features

  • Exciting new Features

  • What's Coming Next


SURFACE AUTOMATION


Surface Automation was first released with the 1910 release.

Surface Automation means automating the UI using visual elements such as labels, controls, images, and icons, i.e., "staying on the surface", as opposed to using DOM elements {IDs, classes, tags, etc.} or application-specific APIs.

Why and when to use Surface Automation

  1. To automate legacy applications where native automation technologies don’t exist or are very niche.

  2. To automate use cases where an image/icon needs to be recognized.

  3. To automate applications running inside virtual machines (VMs)/Citrix environments where the agent cannot be installed.


Technology Used

  • Machine Learning

  • Optical Character Recognition

  • Image Processing Algorithms


Existing Features


1. Label extraction [available since the 1910 release] - This feature enables bot developers to select visual labels [texts] on the page as items; each item can then be associated with actions {click, set, etc.}. Let me explain this feature with a use case: say you want to create a bot that logs in to an SAP application running inside a Citrix environment. Here you cannot access the underlying APIs or DOM attributes of the login application because of the VM constraints. [See the image below - all red boxes are recognized texts/labels, and actions are associated with the corresponding item/label.]
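To make the idea concrete, here is a minimal, purely illustrative sketch of label extraction: an OCR engine returns recognized words with bounding boxes, and the bot picks the box for the label it needs. The function and data names (`find_label`, `ocr_words`) are hypothetical, not the product's actual API.

```python
# Illustrative sketch only - not the SAP Intelligent RPA API.
# An OCR pass yields (text, bounding box) pairs; the bot then looks
# up the label it was designed against.

def find_label(ocr_words, target):
    """Return the bounding box (x, y, w, h) of the first recognized
    word matching `target`, or None if the label is not on screen."""
    for text, box in ocr_words:
        if text.lower() == target.lower():
            return box
    return None

# Example: words recognized on a login screen.
words = [("User", (100, 200, 40, 16)), ("Password", (100, 240, 70, 16))]
print(find_label(words, "password"))  # (100, 240, 70, 16)
```

Once the box is found, an action such as a click or a set can be dispatched at its coordinates, which is exactly what associating an action with the item does.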

2. Template Matching [available since the 2002 release] - This feature enables bot developers to select a template {icon, logo, etc.} as a zone and create an item out of it. Behind the scenes, the algorithm performs a pixel-by-pixel comparison between the template and a screenshot of the application, and the highest-matching region is returned as the recognized template. The template behaves like any other item on which you can perform actions {click, set, etc.}. [See the image below - every red box can be created as a template.]
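The pixel-by-pixel comparison can be sketched as a sliding-window search: the template is compared against every position in the screenshot, and the position with the smallest difference wins. This is a toy sum-of-squared-differences version, not the product's actual algorithm.

```python
# Toy sketch of pixel-by-pixel template matching (illustrative only):
# slide the template over the screenshot and return the offset with
# the smallest sum of squared differences.

def match_template(screen, template):
    """screen, template: 2D lists of grayscale pixel values.
    Returns the (x, y) top-left corner of the best-matching region."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for y in range(len(screen) - th + 1):
        for x in range(len(screen[0]) - tw + 1):
            ssd = sum(
                (screen[y + j][x + i] - template[j][i]) ** 2
                for j in range(th) for i in range(tw)
            )
            if best is None or ssd < best:
                best, best_pos = ssd, (x, y)
    return best_pos

screen = [
    [0, 0, 0, 0],
    [0, 9, 9, 0],
    [0, 9, 9, 0],
]
template = [[9, 9], [9, 9]]
print(match_template(screen, template))  # (1, 1)
```

Note that this search returns exactly one "best" region, which is why plain template matching struggles when the same icon appears more than once, the limitation the next feature addresses.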

New Exciting Features


These features will be released with the 2004 release.

1. Multi-Template Matching


Enhancement of Template Matching feature

Multi-template matching, as the name suggests, is an enhancement of Template Matching. It enables the bot to detect a template uniquely when there are multiple occurrences of it. In the image below, there are two occurrences of the "Search" field {inside the yellow boxes} and two occurrences of the gear icon {inside the red boxes}. In both cases, plain template matching works poorly because it does not support this use case.


To extend support for these use cases, we have implemented a new algorithm, multi-template matching. Its implementation considers the items around the templates [highlighted with green boxes in the figure below] and asks bot developers to label the template with the nearest item. This gives bot developers the flexibility to choose the "label by" item based on the application's runtime behavior.


Let's say the bot developer wants to recognize the second gear icon uniquely. To do so, they should follow these three steps:
 I.  Create a template by selecting the area of the gear icon.

II.  Select the nearest text and create an item.

III. Label the template with the item.

See the below image for the visual illustration.


Leftmost part - a template [red box] is created and labeled by the nearest item, "Status" [green box]. Middle part - the distances [d1, d2] of the templates from the item [green box] are measured. Rightmost part - the template with the minimum distance [d1] is returned [green box].
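The disambiguation step above can be sketched in a few lines: given the centers of every occurrence the template matcher found and the center of the "label by" item, return the occurrence closest to the label. Names here are illustrative, not the product's API.

```python
# Sketch of the "label by nearest item" idea (illustrative only):
# when a template matches in several places, keep the occurrence
# closest to the center of the chosen label item.

import math

def nearest_to_label(matches, label_center):
    """matches: list of (x, y) centers of template occurrences.
    Returns the occurrence at the minimum Euclidean distance
    from the label item's center."""
    return min(matches, key=lambda m: math.dist(m, label_center))

# Two gear icons matched; the "Status" label sits near the second one.
gears = [(400, 120), (400, 320)]
status_center = (300, 310)
print(nearest_to_label(gears, status_center))  # (400, 320)
```

Because the label is re-detected at run time, this stays robust even if the whole screen layout shifts, as long as the template and its label move together.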

2. Detect correct text from multiple matches


Enhancement over the label-extraction capability 

The existing implementation of label extraction uniquely identifies words/labels that have a single occurrence on the page; it does not support cases where a word/label occurs more than once.

In the image below, the texts "Create" and "Search" each occur exactly once on the page, so label extraction identifies each word uniquely.


But when there are multiple occurrences of the words "Search" and "Create" [below image], label extraction always picks the topmost word, leaving the bot to act ambiguously because this is an unsupported use case. To tackle these challenges, we have introduced support for such use cases.


The implementation of "Detect correct text from multiple matches" helps bots uniquely identify the design-time items [texts] at run time, even when the items occur multiple times.

This implementation gives bot developers flexibility by asking for the expected change in the text's location along the height {y-axis} and/or width {x-axis} at run time.

There are two ways a bot developer can use this feature:

I. Using the design-time coordinate -

At design time, when you create an item by selecting a text {label-extraction output} on the page, the associated coordinates are also captured. If bot developers decide to use this feature, they have to provide the expected changes [percentage change] along the width and/or height {parameters enabled inside the Studio}.

At run time, the text returned is the one that lies within the specified area and is at the minimum distance from the design-time coordinate.

In the image below, the text "Search" is the target item; h is the expected percentage change along the height, and w is the expected percentage change along the width.

At run time, to uniquely identify the target text "Search", the area highlighted in blue is scanned, and the text is identified if it lies inside that area.
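This selection rule can be sketched as a filter-then-minimize step: keep only the candidate texts inside the search area around the design-time coordinate, then pick the one nearest to it. All names are illustrative, and I am assuming the percentages are taken relative to the screen size.

```python
# Illustrative sketch (not the product API). Assumption: the w/h
# percentages define the search area relative to the screen size.

import math

def pick_by_design_coord(candidates, design_xy, w_pct, h_pct,
                         screen_w, screen_h):
    """candidates: (x, y) centers of every run-time match of the text.
    Keep those inside the search area around the design-time
    coordinate, then return the one at the minimum distance."""
    dx = screen_w * w_pct / 100
    dy = screen_h * h_pct / 100
    x0, y0 = design_xy
    inside = [
        (x, y) for x, y in candidates
        if abs(x - x0) <= dx and abs(y - y0) <= dy
    ]
    if not inside:
        return None  # no occurrence within the expected area
    return min(inside, key=lambda c: math.dist(c, design_xy))

# Three occurrences of "Search"; only one lies near where the text
# sat at design time.
hits = [(100, 50), (520, 60), (110, 400)]
print(pick_by_design_coord(hits, (95, 55), 10, 10, 1920, 1080))  # (100, 50)
```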


Steps to follow inside the Studio to use this feature [below image]:

Step 1. Create an item by selecting the text, then right-click and choose "Associate to New Item".

Step 2. Verify the associated coordinates {X, Y} of the text. [Never remove these values.]

Step 3. Provide the expected change value along the height and/or width {h and w are percentages of change and range from 0 to 100}.


II. Using label by an item - Similar to the first approach, but instead of the design-time coordinate, this approach uses a point at the offset distance from the center of the "label by" item as the reference point.

The offset distance is the distance between the centers of the target item and the label by item.

In the diagram below, the text "Create" [yellow box] is the item, and the nearest text "Status" [green box] is the label by item.

The reference point, i.e., the midpoint of the blue search area, lies at the offset distance from the center of the "label by" item, Status; the offset distance itself is the distance between the center of the target item, Create, and the center of the item chosen as its label, Status. Once again, w and h are the expected percentage deviations along the width and height, respectively.

The center of the look-out ellipse is the reference point. w and h are inputs provided by the bot developer at design time.
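The "label by item" variant can be sketched the same way as the coordinate-based one, except the reference point is computed at run time from the label's detected center plus the design-time offset. Names are illustrative, and for simplicity I assume the w/h tolerances have already been converted from percentages to pixels.

```python
# Illustrative sketch (not the product API). Assumption: w_tol/h_tol
# are the search-area half-widths in pixels, already derived from the
# percentage inputs.

import math

def pick_by_label(candidates, label_center, offset, w_tol, h_tol):
    """candidates: (x, y) centers of every run-time match of the text.
    offset: design-time (dx, dy) from the label's center to the
    target's center. The reference point is the label's run-time
    center shifted by that offset."""
    ref = (label_center[0] + offset[0], label_center[1] + offset[1])
    inside = [
        c for c in candidates
        if abs(c[0] - ref[0]) <= w_tol and abs(c[1] - ref[1]) <= h_tol
    ]
    if not inside:
        return None  # no occurrence within the expected area
    return min(inside, key=lambda c: math.dist(c, ref))

# Two "Create" texts; the one near the "Status" label wins.
creates = [(200, 100), (200, 500)]
status = (120, 480)
print(pick_by_label(creates, status, (80, 20), 60, 60))  # (200, 500)
```

The advantage over the first approach is that the reference point follows the label wherever it appears at run time, so the bot tolerates larger layout shifts.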


Steps to follow inside the Studio to use this feature [below image]:

Step 1. Create an item by selecting the text.

Step 2. Create the label by item by selecting the text nearest to the item.

Step 3. Verify the label by item.

Step 4. Provide the expected change value along the height and/or width {h and w are percentages of change and range from 0 to 100}.


The required steps are illustrated in the animation below.



 

What's Next: stay tuned for a deep learning-enabled object detection capability inside the Studio.

 

Please like, comment, and share if you found this helpful.

Thank you for your time.

Happy Bot Building 🙂