
Workflow - Start Condition Failure

Former Member
0 Kudos

Hi All,

I am trying to put a start condition on a workflow of mine. I just want to check that the order type is PM01 or PM03 and the plant is not 0010.

I have my start condition set-up but every time I run it I get this error in my event log:

Operator 'EQ': The value of the left operand cannot be determined


  (  " PM01 or PM03
     &Maintenance order.Order type& = PM01
     or &Maintenance order.Order type& = PM03
  )
  and &Maintenance order.Plant& ≠ 0010

The funny thing is, if I go to SWUE and kick off the event, it passes the start condition fine. So I think there is an issue with my container being passed to the start condition.

Is there any way I can fix this?

Many Thanks For Any Help Provided,

Colm

Accepted Solutions (0)

Answers (7)


Former Member
0 Kudos

Hi All,

Apologies for the delay in response, I was on vacation. While I was away one of my colleagues took on this issue and logged a support call with SAP. It turns out we had to implement SAP Note 1179350. Apparently this note has a fairly big span of impact, so we are currently testing through all functional areas in QAS.

I will mark the original question as resolved, but please feel free to continue the discussion which, although it has deviated slightly, is still relevant.

Thanks for all suggestions,

Colm

Former Member
0 Kudos

Hi Mike,

Here is the setup in BSVW:

ORI BUS2007 RELEASED Released

and the status restriction is this:

I0002 REL Released

Thomas,

I know what you are saying about the event triggering before the information has been written to the database and I am pretty familiar with the concept. What I can't understand is that, even if the information isn't written to the database, why is the container still not populated?

I would have thought that the container gets passed to my event, my start condition checks the container, and the workflow proceeds based on the value in the container.

I understand that in real terms, my event is kicked off (lightning) and then the information gets written to the database (thunder). But I would have thought that my container is available in the lightning, so why shouldn't the check work?

Thanks again for all your help!

Colm

pokrakam
Active Contributor
0 Kudos

>

> Lightning is the Event. Thunder is the Update.

>

> You see the Flash of the Lightning before you hear the Sound of the Thunder.

An interesting analogy, and one that would apply if events are not correctly set up. In theory, events are exactly the mechanism by which we can get around the update delays because they should be designed to be fired right at the end or together with the final update - i.e. events should be part of the thunder, or even an echo of it.

From the [SAP Doco on Change Documents|http://help.sap.com/saphelp_nw70ehp1/helpdata/en/cc/d40b37da4de72fe10000009b38f889/frameset.htm]: The system then creates the event whenever a change document is written for the change document object. The change document is written when the change is updated. The procedure described, putting the event after the logging, ensures that the event is not created until the change has actually been made.

The event-before-DB-update situation is often the case with custom events raised incorrectly in code or userexits, but should not happen when using standard methods such as status management. Note I say should... and if it doesn't then it should be a case for OSS.

>

> Hi Mike,

>

> Here is the setup in BSVW:

> ORI BUS2007 RELEASED Released

>

> and the status restriction is this:

> I0002 REL Released

In this instance, using config to raise events based on SAP system statuses there should definitely not be any update issues. In theory anyway. I would suggest testing this in isolation: Create a small workflow with the order as an importing container element and trigger it using the same event. If the object arrives in the WF then the start condition is a case for OSS. One exception would be if you are using custom fields which are being updated in an unusual way, but this is obviously not the case for a document type.

former_member185167
Active Contributor
0 Kudos

Hello Colm,

You say:

"I would have thought that the container gets passed to my event, my start condition checks the container, and the workflow proceeds based on the value in the container."

As Thomas said, not everything gets placed in the container; in the case of a BOR object only the type (eg BUS2009) and the key make it into the container. These are then used later to fetch the attributes. So it makes a big difference if the start conditions are based on key or non-key fields and the data hasn't been saved yet.
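This type-plus-key behaviour can be sketched in a few lines. This is a conceptual model only, not SAP code: the class, table, and attribute names are all invented for illustration, but the failure mode mirrors the "value of the left operand cannot be determined" error when the attribute fetch hits a row that hasn't been committed yet.

```python
# Conceptual sketch (not SAP code): a workflow event container carries only
# the BOR object type and key; attributes are fetched from the DB on demand.

ORDERS_DB = {}  # stands in for the order table, keyed by order number

class BorReference:
    """Hypothetical stand-in for a BOR object reference (e.g. BUS2007)."""
    def __init__(self, objtype, key):
        self.objtype = objtype   # e.g. 'BUS2007'
        self.key = key           # e.g. order number '800123'

    def attribute(self, name):
        # Attributes are resolved lazily against the database, not the container.
        row = ORDERS_DB.get(self.key)
        if row is None:
            # Mirrors "Operator 'EQ': The value of the left operand cannot be
            # determined" - the row isn't committed yet.
            raise LookupError(f"{self.objtype} {self.key}: object does not exist")
        return row[name]

def start_condition(order_ref):
    return order_ref.attribute('order_type') in ('PM01', 'PM03') \
        and order_ref.attribute('plant') != '0010'

ref = BorReference('BUS2007', '800123')
try:
    start_condition(ref)          # event fired before the DB commit
except LookupError as e:
    print('condition fails:', e)

ORDERS_DB['800123'] = {'order_type': 'PM01', 'plant': '1000'}
print(start_condition(ref))       # after commit the same check succeeds
```

The key point: the condition doesn't read "the container", it reads the database via the key in the container, which is why key fields evaluate fine while non-key attributes fail before the update completes.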

But generally, as Mike said, this isn't a problem in standard SAP, it's usually only when it's been custom made and someone forgot a commit somewhere.

regards

Rick Bakker

Hanabi Technology

Former Member
0 Kudos

Mike,

Your intuition is incredible. My experience came from debugging through SD code and learning that the event was indeed being triggered in a user-exit within subroutines of SAPMV45A/MV45AFZZ.

So that I don't muddy the waters of this thread, let me ask all readers to set my contributions aside as sort of an addendum to the thread.

I on the other hand am going to take this opportunity to familiarize myself with BSVW.

Thanks for your contribution Mike.

Regards,

Thomas Carruth

Former Member
0 Kudos

Mike,

I think you're on to something with the "echo" addition to my analogy. Suppose the SD code looks like this pseudocode:


CALL FUNCTION 'UPDATE_SD_DOC'    IN UPDATE TASK. " pseudo FM name
CALL FUNCTION 'WRITE_CHANGE_DOC' IN UPDATE TASK. " pseudo FM name
CALL FUNCTION 'SWE_EVENT_CREATE_FOR_UPD_TASK'.   " actual FM name
COMMIT WORK.

The effect of the COMMIT WORK will be to hand off all three FMs above to the Update Process. The Update Process in turn executes the FMs in the order they were called originally.

A COMMIT WORK would again have to be called at the end of the update in the Update Process in order to commit the true application data.

Is it possible that the Thunder/Lightning/Echo concept is happening within the context of running in the Update Process?

And in what way might system performance and load be contributing to the problem? What load is the Update Process under? What seemingly worked fine in a development system starts to act the way Colm is describing only when transported and executed in Production.

Ok, I think I'm done.

Edited by: Thomas Carruth on Mar 27, 2009 7:06 PM

pokrakam
Active Contributor
0 Kudos

Hi Thomas,

Your earlier explanation is perfectly valid and is a common problem people have with workflow events. However, I'm not sure why you say that another COMMIT WORK would be required? The effect of the COMMIT WORK is to submit all the queued updates as a single LUW which either succeeds or fails. Raising the event is part of the LUW, so it too is subject to a rollback if anything in the LUW fails.

On a more technical level, this means that from an outside perspective nothing happens until the transaction is successful. While a LUW is in progress (you could be updating a million records in a single transaction), everything else sees the old data. Oracle calls this read consistency. Once the system has determined the transaction was successful the COMMIT is complete and the new state of all records becomes visible to the rest of the system. This sort of mechanism makes a change appear instantaneously - in this case our database update and publishing the event.
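The read-consistency idea can be shown with a toy snapshot model. This is purely illustrative (no real database involved): readers see only the committed state until the whole transaction commits, at which point all changes appear at once.

```python
# Toy illustration of read consistency: outside readers see the committed
# snapshot until the transaction commits, then all changes appear together.

committed = {'order_type': 'PM02'}   # state visible to everyone else
pending = {}                          # uncommitted changes of an open transaction

def write(key, value):
    pending[key] = value              # change stays private to the transaction

def read(key):
    return committed.get(key)         # outside readers never see pending data

def commit():
    committed.update(pending)         # all changes become visible at once
    pending.clear()

write('order_type', 'PM01')
print(read('order_type'))   # still 'PM02' - the update is in flight
commit()
print(read('order_type'))   # 'PM01' - visible only after the commit
```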

Thus the events are published during the update, but the event receivers are called using tRFC, which is an asynchronous process. I may be wrong here but it is my understanding that this tRFC "becomes visible" in the RFC list and is processed upon successful completion of the LUW, i.e. after the instance at which the DB has been updated.

That's how a simple update task should work in theory, but in practice it gets more complicated. Not only do we have update tasks, but we can also submit something in a background task or via RFC. A transaction such as the sales document one would typically have a V1 and a V2 update. Most likely the user exit will be processed during the V1 update, so raising an event there will call the event receiver before the V2 update has started. Raising the event in a user exit using an update task and adding a COMMIT WORK statement will have the same effect, because this will have its own update task that is likely to execute before the transaction's V2 update. For this reason most user exits / BAdIs should not contain any commit statements, because this would cause the lightning/thunder effect you described - you would effectively break the transaction into two separate update tasks. The solution here is to raise the event in an update task but without any COMMIT statements - this would then be executed together with the transaction's own update task.
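The "stray COMMIT splits the LUW" point can be modelled with a simple queue. This is a sketch of the concept only, with made-up function names, not of how the SAP kernel actually implements update tasks: registrations accumulate until COMMIT WORK hands the whole batch over as one unit.

```python
# Conceptual model (not SAP internals): CALL FUNCTION ... IN UPDATE TASK
# queues work; COMMIT WORK hands the whole queue over as one LUW, and the
# entries run in registration order.

executed = []   # batches the "update process" has run, in order
luw_queue = []  # pending registrations for the current LUW

def in_update_task(fm_name):
    luw_queue.append(fm_name)         # register, don't execute yet

def commit_work():
    # Hand over the queued registrations as a single LUW.
    batch = list(luw_queue)
    luw_queue.clear()
    executed.append(batch)

# Correct pattern: raise the event in the update task, no extra COMMIT.
in_update_task('UPDATE_SD_DOC')       # application update
in_update_task('RAISE_WF_EVENT')      # event travels with the same LUW
commit_work()
print(executed)   # [['UPDATE_SD_DOC', 'RAISE_WF_EVENT']] - one LUW

# Broken pattern: a user exit issues its own COMMIT WORK mid-transaction,
# splitting the work into two LUWs - the event can fire before the update.
executed.clear()
in_update_task('RAISE_WF_EVENT')
commit_work()                         # the user exit's stray COMMIT
in_update_task('UPDATE_SD_DOC')
commit_work()
print(executed)   # [['RAISE_WF_EVENT'], ['UPDATE_SD_DOC']] - two LUWs
```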

That's all theory, who knows what weird and wonderful coding SAP may have in place just to be different.

Cheers,

Mike

Former Member
0 Kudos

Hi Mike,

I like the direction you're going, but first let me thank Colm for letting us have this dialog. While we may not be answering his question directly, I believe this topic is very important to anyone who works with SAP Workflow.

Mike, I'm with you on your explanation and essentially had the same understanding myself. Your point about the Event Receiver being run in tRFC is quite true, as is the point that it is not seen by the queue until the LUW that called it completes and commits (explicitly or implicitly). The RFC call actually just registers the FM and parameters in the ARFCSTATE (and other ARFC...) tables, which in itself is just data. The registration data has to be committed before the queue sees it - again, to your point about read consistency.
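The registration-as-data idea can be sketched like this. It is a rough conceptual model, not real tRFC code: the table and function names here are stand-ins (though SWW_WI_CREATE_VIA_EVENT is a real workflow FM name, its use here is purely illustrative).

```python
# Rough sketch of the tRFC idea described above: the event receiver call is
# first recorded as data (like the ARFCSTATE tables); the scheduler only
# picks it up once that registration has itself been committed.

import json

trfc_table = []      # committed registrations, visible to the scheduler
uncommitted = []     # registrations written but not yet committed

def call_in_trfc(fm_name, **params):
    # "Calling" just serializes the FM name and parameters as a row of data.
    uncommitted.append(json.dumps({'fm': fm_name, 'params': params}))

def commit():
    trfc_table.extend(uncommitted)   # registration rows become visible
    uncommitted.clear()

def scheduler_run():
    # The scheduler processes only committed rows.
    done = [json.loads(row)['fm'] for row in trfc_table]
    trfc_table.clear()
    return done

call_in_trfc('SWW_WI_CREATE_VIA_EVENT', evt='RELEASED', objkey='800123')
print(scheduler_run())   # [] - nothing is visible before the commit
commit()
print(scheduler_run())   # ['SWW_WI_CREATE_VIA_EVENT']
```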

Back to BSVW. I'm aware of it, haven't used it, but I have implemented Change Documents for a "bolt-on" that stored complex approval rules for FIPP. The business wanted to know when and who changed a rule, so CDs were the route I took to capture that info.

Ultimately, I had to place the call to the Change Document Function Modules in the logically correct position of the Z-program (actually a Function Group).

Do we know if SAP's call to create the CDs for the Maintenance Order is positioned correctly within the SD code? Maybe that's what needs to be examined, which gets back to an earlier point of yours that this is becoming OSS note territory.

OR

Maybe we need to ask Colm if a COMMIT WORK is being issued in any of the SD User Exit Subroutines effectively breaking the LUW in the manner you just described above.

Regards,

Thomas Carruth

Former Member
0 Kudos

Mike - I didn't answer your question to me, so let me try now.

> Your earlier explanation is perfectly valid and is a common problem people have with workflow events. However, I'm not sure why you say that another COMMIT WORK would be required? The effect of the COMMIT WORK is to submit all the queued updates as a single LUW which either succeeds or fails. Raising the event is part of the LUW, so it too is subject to a rollback if anything in the LUW fails.

I see it as the first COMMIT WORK submitting the updates to the Update Processor. Just that - a submission of work to do. The Update Processor is an entirely separate running process on the Application Server. I like to think of it as just another user on the system.

With the update registrations committed, only now does the Update Process see them and subsequently starts updating. (If it fails we see the failure in SM13). Now within the Update Process itself, I believe there has to be a COMMIT WORK (#2) that effectively commits the "INSERT ... INTO VBKP ..." and others.

Doesn't this get at the heart of the SAP LUW. The updates from COMMIT WORK #1, which in itself is a physical DB transaction plus the updates from COMMIT WORK #2, another physical DB transaction, constitutes the Logical Unit of Work.

That's why I believe two COMMIT WORK are needed - One at the end of the Dialog Process, and another at the end of the Update Process (it may be built into the Update Process that a COMMIT WORK [#2] is issued at the completion of all the updates - not sure).

Regards,

Thomas Carruth

Edited by: Thomas Carruth on Mar 31, 2009 9:00 PM

pokrakam
Active Contributor
0 Kudos

Hi Thomas,

Another great explanation, and I agree it's useful for people to understand how this works.

>

> I see it as the first COMMIT WORK submitting the updates to the Update Processor. Just that - a submission of work to do. The Update Processor is an entirely separate running process on the Application Server. I like to think of it as just another user on the system.

>

> With the update registrations committed, only now does the Update Process see them and subsequently starts updating. (If it fails we see the failure in SM13). Now within the Update Process itself, I believe there has to be a COMMIT WORK (#2) that effectively commits the "INSERT ... INTO VBKP ..." and others.

I think you are either confusing CALL FUNCTION ... IN UPDATE TASK with CALL FUNCTION ... IN BACKGROUND TASK, or confusing SAP COMMITs with database COMMITs (which aren't used in ABAP). Both are very different, and a second COMMIT WORK could be used in the case of background updates, since they are called using RFC (effectively another user, as per your explanation).

Update tasks are a lower-level process and form part of the same LUW. The V1 update happens in the same dialog session, and V2 is a lower-priority update that is deferred but belongs to the same LUW. It is possible to crash a V2 update and end up reversing the V1 update. The system will send the user a mail when it happens - they will have received e.g. the message "Sales Order 9999999 has been created" and the order then no longer exists.

The doco on this explains it very well:

[http://help.sap.com/saphelp_nw70ehp1/helpdata/en/e5/de86e135cd11d3acb00000e83539c3/frameset.htm]

Cheers,

Mike

pokrakam
Active Contributor
0 Kudos

Hi Gavin,

The object should be populated if you are raising the event in status management, so I would say a start condition is the right way to go. Status management events are raised in the update task which should guarantee that the DB update is complete, just like change documents.

I would look at the BSVW config, or it could even be an OSS issue. What status object type and status are you using? System or user status?

Cheers,

Mike

Former Member
0 Kudos

Hi All,

Yes, Thomas, you are correct: when we are testing in SWUE we are testing with an already created order, so I can see your point about the timing issue.

However I don't think this should be the problem because we are passing a container into our start condition and it is the container that is being checked against. So even if the document hasn't been created in the system yet, surely this container should still have the values in it to validate against?

Or am I completely wrong?

Former Member
0 Kudos

Hi Colm,

True, you are passing Container Elements, one of which appears to be a Business Object Reference.

The Start Condition upon receiving that reference first has to Instantiate the reference. To do that it has to read from the Order Table. But guess what - the data for that Order Key does not exist - yet.

The solution proposed above by Arghadip is a very good one and you might want to give it strong consideration. And yes, it may result in the creation of unnecessary workflow instances, but it should resolve your immediate issue.

In tandem, consider running SWWL to delete those irrelevant workflow instances.

Regards,

Thomas Carruth

Chief SAP Workflow Consultant

SBWP Service Corp

Former Member
0 Kudos

Why wouldn't the data exist in the tables?

Because the SQL Inserts are called in an Update Function Module, which in turn is passed to, and performed by the Update Process, which is an entirely separate Application Server Process.

By the time the Update Process gets to start the update (very fast), the triggering event has already fired. Hence the Start Condition finds itself working with a logical reference that has yet to be physically created in the DB.

Here's the most important thing to understand.

When you save the Order, the corresponding COMMIT WORK does not write the actual order data to the tables. It writes data into system tables that registers the data needed by the Update Process to write the Order Data to the tables.

The Update Process reads the registered data, which is actually all the parameters needed by the Function Module that was called "IN UPDATE TASK" much earlier when you saved the order.

This all has to do with the SAP Logical Unit of Work (LUW) and the Update Process.

To add any more on the subject is probably outside the general scope of this forum, though if you are going to be developing workflows on a regular basis, it makes for good bedtime reading to learn as much as you can about the SAP LUW, Update Tasks, and additionally, the tRFC queue.

Regards,

Thomas Carruth

Chief SAP Workflow Consultant

SBWP Service Corp.

Former Member
0 Kudos

One last thought...

Think of this all as "Thunder and Lightning". Better yet, "Lightning and Thunder".

Lightning is the Event. Thunder is the Update.

You see the Flash of the Lightning before you hear the Sound of the Thunder.

Regards,

Thomas Carruth

Chief SAP Workflow Consultant

SBWP Service Corp

Former Member
0 Kudos

Thomas is correct. The timing effect is the issue here. In this case it is better to evaluate the document type inside the workflow rather than outside.

Thanks

Arghadip

Former Member
0 Kudos

I've had issues with triggering events and executing start conditions upon the creation of SD documents.

Judging from the error message, I believe the problem is related to the event being triggered and the start condition being evaluated before the order is committed to the database. The Maintenance Order business object is probably NULL within the context of the start condition evaluation.

When you're testing in SWUE, I presume you are using a pre-existing order, which essentially eliminates the timing effect of the real transaction and event triggering.

You may need to replace the Start Condition with a Check Function Module and even with that there are issues.
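The check-function idea can be sketched as follows. This is a hedged conceptual model, not real SAP code: a check function module gets a chance to veto the receiver linkage before the workflow starts, and all names here (EventVetoed, check_fm, deliver_event) are invented for illustration.

```python
# Hypothetical sketch of the check-function-module idea: instead of a start
# condition, a check function vetoes the event receiver before the workflow
# is started. All names here are illustrative, not real SAP FMs.

class EventVetoed(Exception):
    pass

def check_fm(event_container):
    # The same logic as the start condition, but in code, where you could
    # handle missing data defensively (e.g. re-read the order yourself).
    order_type = event_container.get('order_type')
    plant = event_container.get('plant')
    if order_type not in ('PM01', 'PM03') or plant == '0010':
        raise EventVetoed('linkage not executed')

def deliver_event(container, receiver):
    try:
        check_fm(container)
    except EventVetoed:
        return None          # the receiver is never started
    return receiver(container)

started = deliver_event({'order_type': 'PM01', 'plant': '1000'},
                        lambda c: 'workflow started')
print(started)               # 'workflow started'
skipped = deliver_event({'order_type': 'PM02', 'plant': '1000'},
                        lambda c: 'workflow started')
print(skipped)               # None - the check vetoed the event
```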

So that I don't waste any more of your time, could you confirm whether or not this problem occurs upon document creation?

Regards,

Thomas Carruth

Chief SAP Workflow Consultant

SBWP Service Corp.

Edited by: Thomas Carruth on Mar 26, 2009 8:51 PM

Former Member
0 Kudos

I have been doing extensive debugging and it looks like the problem is the way this event is kicked off. The event is being triggered by config set up in the BSVW transaction.

Has anyone experience in calling events from this config?

surjith_kumar
Active Contributor
0 Kudos

Hi,

In the BSVW, the mapping of the event is not correct. Check the setting once again.

Refer to point 5 in this [blog|https://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/7219] (original link is broken).

Also refer to this [link|https://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/e038cc2d-0cde-2a10-e28e-f50025578112].

Regards,

Surjith

Former Member
0 Kudos

Surjith,

I found your documentation very useful, although there didn't seem to be any information in it regarding why a container might not be populated when calling these events using BSVW.

Does anyone know if it is possible to debug a BSVW event trigger to make sure that the container is populated correctly when triggering the event?

Many Thanks for all your help,

Colm