on 01-19-2017 7:11 PM - last edited on 02-04-2024 3:04 AM by postmig_api_4
Hi all,
We have a datahub server instance where publication was working until today. Today we noticed that the items are no longer being sent to hybris. We made a call to the /target-system-publications endpoint in datahub and most of the publications (14 out of 15) were in PENDING status. Previously we had some items which generated an error during publish (2 items).
We were trying to replicate b2b customers when we encountered this issue.
We are using datahub version 6.0.0.
Any advice on how to overcome this issue would be helpful. Thanks in advance.
Alin,
it's hard to guess what's going wrong. However, you're saying 14 out of 15 publications are in PENDING status, so what is the status of the remaining one? Is it in IN_PROGRESS status? If so, then you can reformulate the problem as: why is a publication stuck in IN_PROGRESS status? No other publications can be kicked off in the same data pool while a previous publication is unfinished, so that explains the PENDING status of the other publications.
As to the IN_PROGRESS publication, most likely the datahub-adapter on the hybris platform side failed to report completion of the publication back to DataHub. Check the platform log to see if there are exceptions indicating a problem there.
To terminate the publication stuck in IN_PROGRESS status, you can send a PUT request to /core-publications/{publicationID} on the DataHub with
{
"crashReport" : "terminated",
"exportErrorDatas" : []
}
in the body.
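If you prefer to script that call, here is a minimal Python sketch of the PUT request using only the standard library. The base URL, context path, and publication ID are placeholders for your environment, not values from this thread:

```python
import json
import urllib.request

def build_terminate_request(base_url, publication_id):
    """Build the PUT request that reports a stuck publication as terminated."""
    body = json.dumps({"crashReport": "terminated", "exportErrorDatas": []})
    return urllib.request.Request(
        url=f"{base_url}/core-publications/{publication_id}",
        data=body.encode("utf-8"),
        method="PUT",
        headers={"Content-Type": "application/json"},
    )

# Hypothetical host, port, and publication ID -- adjust for your install.
req = build_terminate_request("http://localhost:8080/datahub-webapp/v1", 12345)
print(req.get_method(), req.full_url)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

The request is built separately from being sent so you can inspect the URL and body before firing it at a live DataHub instance.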
After that, the PENDING publications should start processing. If this problem becomes frequent, you can develop a DataHub extension which keeps track of current publications and, if a publication runs for an unacceptably long time, terminates it. This feature will be available in DataHub 6.4, but for 6.0 it's going to be a custom development.
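A sketch of the selection logic such a custom extension would need, i.e. picking out publications that have sat in IN_PROGRESS past a timeout. Note that the 'id', 'status', and 'startTime' field names here are illustrative assumptions, not the actual DataHub publication response schema:

```python
import time

def stuck_publication_ids(publications, timeout_minutes, now=None):
    """Return the ids of publications stuck in IN_PROGRESS past the timeout.

    Each entry is assumed to be a dict with 'id', 'status', and 'startTime'
    (epoch seconds) -- hypothetical field names, not the real schema.
    """
    now = time.time() if now is None else now
    limit_seconds = timeout_minutes * 60
    return [p["id"] for p in publications
            if p["status"] == "IN_PROGRESS" and now - p["startTime"] > limit_seconds]

pubs = [
    {"id": 1, "status": "IN_PROGRESS", "startTime": 0},     # running for an hour
    {"id": 2, "status": "PENDING",     "startTime": 3000},
    {"id": 3, "status": "IN_PROGRESS", "startTime": 3500},  # only 100 s old
]
print(stuck_publication_ids(pubs, timeout_minutes=30, now=3600))  # -> [1]
```

The ids returned by such a check would be the ones to terminate via the PUT request described above.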
Hope it helps.
You mentioned "After that, the PENDING publications should start processing. If this problem becomes frequent, you can develop a DataHub extension which keeps track of current publications and, if a publication runs for an unacceptably long time, terminates it. This feature will be available in DataHub 6.4, but for 6.0 it's going to be a custom development."
Is this feature available from 6.4 onwards? Is it a separate extension?
What about terminating a publication if it is in progress for too long? Is that handled in the datahub-cleanup extension? We are facing issues where publications are stuck in progress for a long time. Sometimes there is no corresponding impex-import job running in Hybris either. Terminating a publication isn't working in 6.6: it says the publication is either pending or in progress and all further actions have been terminated. Neither is the publication terminated, nor do the other pending publications resume until we do a restart. Any help/suggestions to handle this scenario?
DataHub performs a progressive retry to connect to a target system during publication. If the target system is not available, datahub.retry.initial.interval.millis sets how soon the first reconnection attempt will be made. Then, if the connection fails again, it doubles the interval and attempts again. It does so until datahub.retry.max.interval.millis is reached.
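The doubling behaviour described above can be sketched like this. The property names come from the post; the function itself is just an illustration of the interval sequence, not DataHub's actual implementation:

```python
def retry_intervals(initial_ms, max_ms, attempts):
    """First `attempts` retry waits: start at initial_ms, double, cap at max_ms."""
    intervals, interval = [], initial_ms
    for _ in range(attempts):
        intervals.append(interval)
        interval = min(interval * 2, max_ms)
    return intervals

# e.g. datahub.retry.initial.interval.millis=1000,
#      datahub.retry.max.interval.millis=10000
print(retry_intervals(1000, 10000, 6))  # -> [1000, 2000, 4000, 8000, 10000, 10000]
```

Once the cap is hit, every further attempt waits the maximum interval.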
For what you need, read Monitoring Long Running Publications.
Thanks, that was really helpful. However, I configured the following properties to test this locally. Notice that after 1 minute the publication does time out, but its status isn't changed to 'failure'; it remains in the 'pending' state. So any other feed that belongs to this pool is still stuck, waiting for the pending publication. Any thoughts?
datahub.publicationmonitor.enabled=true
datahub.publication.monitor.interval=60000
datahub.publication.timeoutinminutes=1
Jahan,
it's hard to say what the problem is. I would recommend the usual steps:
Make sure pub-recover.jar is on the DataHub classpath
Make sure it's loaded at DataHub startup (search for 'Loading extension pub-recover' in the log).
Examine the log for other clues about why the feature did not work (messages, stack traces, etc.). You should see "Checking for IN_PROGRESS publications" in the log every minute.
I have a similar issue. I have two data pools, one for customers and the other for orders. The customer pool is running correctly, but the order pool is stuck. I checked http://datahubserver/datahub-webapp/v1/target-system-publications/ and I only see the customer pool publications; if I go to http://datahubserver/datahub-webapp/v1/pools/ORDER_INBOUND_POOL/publications, all of them are in PENDING status. I checked and the pub-recover extension is loaded correctly (Data Hub version: 6.6.0.3-RC1). Any idea what it can be? Thanks
If they're all in PENDING status, most likely the auto-publication is not configured properly, and for that reason the publication does not start after composition. See if this document helps: https://help.hybris.com/1808/hcd/7cb1b38932bd4da0a4f02c5ccdaad0ce.html Specifically, look at specifying target systems for the ORDER_INBOUND_POOL.